Inline upgrade of Debian Buster

@nevj :

Hi again, :wave:

I just tried to enlarge Debian´s qcow2 image by 20G so that I could give the inline upgrade another try.
It seems to have failed though. :frowning_face:

Here´s what I did:

  • qemu-img resize virtualdebian.img +20G
  • sudo modprobe nbd max_part=8
  • sudo qemu-nbd -c /dev/nbd0 virtualdebian.img
  • gparted /dev/nbd0
  • then I performed the resize action from within gparted # see protocol below
  • sudo qemu-nbd -d /dev/nbd0
  • sudo rmmod nbd
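For reference, the whole sequence above can be wrapped in one sketch. This is a dry run: each step is echoed rather than executed, since the real commands need root and an actual qcow2 file (`virtualdebian.img` is the image name from this post — substitute your own, and back the file up first):

```shell
#!/bin/sh
# Dry-run sketch of the resize workflow above.
# Swap out `run` for direct execution once you have a backup of the image.
DISK=virtualdebian.img

run() { echo "+ $*"; }            # print each step instead of executing it

run qemu-img resize "$DISK" +20G          # 1. grow the image file
run sudo modprobe nbd max_part=8          # 2. load the nbd kernel module
run sudo qemu-nbd -c /dev/nbd0 "$DISK"    # 3. attach the image as /dev/nbd0
run sudo gparted /dev/nbd0                # 4. resize partitions interactively
run sudo qemu-nbd -d /dev/nbd0            # 5. detach the image
run sudo rmmod nbd                        # 6. unload the module
```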

So far so good, I thought. But I was still sceptical, as gparted still showed that 20GB of unallocated space.

But here´s the protocol of what gparted did:

GParted 1.3.1

configuration --enable-libparted-dmraid --enable-online-resize

libparted 3.4

========================================

Device:	/dev/nbd0
Model:	Unknown
Serial:	
Sector size:	512
Total sectors:	104857600
 
Heads:	255
Sectors/track:	2
Cylinders:	205603
 
Partition table:	msdos
 
Partition	Type	Start	End	Flags	Partition Name	File System	Label	Mount Point
/dev/nbd0p1	Primary	2048	21606399	boot		ext4		
/dev/nbd0p2	Extended	21608446	62912511			extended		
    /dev/nbd0p5	Logical	21608448	23607295			linux-swap		
    /dev/nbd0p6	Logical	23609344	62912511			ext4		
========================================

Grow /dev/nbd0p2 from 19.70 GiB to 39.70 GiB  00:00:01    ( SUCCESS )
    	
calibrate /dev/nbd0p2  00:00:00    ( SUCCESS )
    	
path: /dev/nbd0p2 (partition)
start: 21608446
end: 62912511
size: 41304066 (19.70 GiB)
grow partition from 19.70 GiB to 39.70 GiB  00:00:01    ( SUCCESS )
    	
old start: 21608446
old end: 62912511
old size: 41304066 (19.70 GiB)
new start: 21608446
new end: 104857599
new size: 83249154 (39.70 GiB)

According to the protocol the action was successful.
I don´t quite get it. The disk size was more than 40GB before. So it should be over 60GB now.

Here´s what the Debian VM said after having started it:

rosika2@debian ~> df -h
Dateisystem    Größe Benutzt Verf. Verw% Eingehängt auf
udev            475M       0  475M    0% /dev
tmpfs            99M    4,8M   95M    5% /run
/dev/vda1        11G    7,7G  1,9G   81% /
tmpfs           494M    144K  494M    1% /dev/shm
tmpfs           5,0M    4,0K  5,0M    1% /run/lock
tmpfs           494M       0  494M    0% /sys/fs/cgroup
/dev/loop0      114M    114M     0  100% /snap/core/13308
/dev/loop1       62M     62M     0  100% /snap/core20/1518
/dev/loop3      102M    102M     0  100% /snap/lxd/23155
/dev/loop2      387M    387M     0  100% /snap/anbox/213
/dev/vda6        19G    973M   17G    6% /home
mount-tag        35G     26G  7,3G   78% /mnt/mymount
tmpfs            99M    4,0K   99M    1% /run/user/114
tmpfs            99M       0   99M    0% /run/user/1000

/dev/vda1 is 11GB and /dev/vda6 is 19GB in size… :thinking: .

I don´t know what might have gone wrong despite the “Success” messages provided by gparted.

Cheers from Rosika :slightly_smiling_face:

P.S.:

However this is what sfdisk says:

rosika2@debian ~> sudo sfdisk -l
Disk /dev/vda: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4f7a63d0

Device     Boot    Start       End  Sectors  Size Id Type
/dev/vda1  *        2048  21606399 21604352 10,3G 83 Linux
/dev/vda2       21608446 104857599 83249154 39,7G  5 Extended
/dev/vda5       21608448  23607295  1998848  976M 82 Linux swap / Solaris
/dev/vda6       23609344  62912511 39303168 18,8G 83 Linux

So /dev/vda2 is the one that has grown by 20GB.
But df -h doesn´t list it.
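That is actually expected: df only lists mounted filesystems, and an extended partition is just a container for the logical partitions inside it, so vda2 will never appear in df no matter how big it gets. A quick bit of shell arithmetic on the sector counts from the gparted protocol above confirms the 20G really did arrive (sectors are 512 bytes each):

```shell
# Sector counts for /dev/nbd0p2, taken from the gparted protocol.
old_sectors=41304066    # size before the resize
new_sectors=83249154    # size after the resize
echo "vda2 grew by $(( (new_sectors - old_sectors) * 512 / 1024 / 1024 / 1024 )) GiB"
# → vda2 grew by 20 GiB
```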

P.S.2:

Here´s what gparted says now:

Hi Rosika,
Hey, that is clever using nbd

I think you need to grow nbd0p6 … that is the linux root filesystem… it is still only 20Gb (18.74) but it says only 1.93Gb is used… is that correct? What does df say when you are in Debian?

I think what you have done is expand the virtual disk, but you have not expanded the root partition into the extra free space. Does that make sense?

Sorry, I forgot to look up how to see the disk size in virt-manager. Had a busy night with Devuan.
Regards
Neville

1 Like

Hi Rosika,
It does report the size
Screenshot_2023-11-09_16-35-43
but you can not change it.
Neville

Somewhere in this forum - I reported on my “success or failure” of doing an in-place upgrade from Buster to Bookworm - it mostly worked - on my Pi Zero 2W - but I noticed it was chewing up more resources (the Pi Zero 2W is blessed with 4 cores, but still only 512 MB RAM) than it was previously - I solved part of this by choosing what to start, and when, and I think by revisiting zram or zswap… pretty sure I still have that Pi Zero 2W (note the “2” - I wouldn’t even contemplate this on a plain single-core Pi Zero or Zero W) running:

 x@anthe  ~  uptime
 18:05:12 up 42 days, 22:56,  2 users,  load average: 3.25, 3.17, 3.11
 x@anthe  ~  free -h
               total        used        free      shared  buff/cache   available
Mem:           426Mi       123Mi        99Mi       0.0Ki       203Mi       248Mi
Swap:          355Mi        54Mi       301Mi

I now have 2 Pi Zero 2W - and I can’t remember which one is which - because swapping in and out SD cards makes things like this a breeze…

No kudos to “me” - but it’s pretty impressive that it’s still running… I dread “in place” upgrades - I’m involved in a project to assess the feasibility of “in place” upgrades of their mostly Red Hat, or “Red Hat based”, servers to non-End-of-Life releases - DREAM ON! They won’t even allow servers internet access on port 80 or 443 - you cannot do Leapp without access to the public cloud… But we have to dot all our “i’s” and cross all our “t’s” before they will sign off on “side-by-side” migration (wasn’t this one of the main purposes of virtualization in the first place?).

I’ve done x86_64 in-place upgrades of Debian (much better than Red Hat / RPM based distros), e.g. from Squeeze to Wheezy (only because the Citrix XEN hypervisor only had templates for Squeeze), and it was pretty straightforward. Believe me, the Debian ecosystem is much more friendly to in-place upgrades - in some cases all it means is breaking out sed and awk to change a few strings in /etc/apt/sources.list - try that in f–king Red Hat Enterprise Linux :smiley:
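The sources.list swap described above can be sketched on a throwaway copy like this (illustrative only — a real Buster→Bullseye upgrade also renames the security suite from `buster/updates` to `bullseye-security`, which a blind substitution does not handle):

```shell
# Work on a scratch copy, never on /etc/apt/sources.list directly.
cat > /tmp/sources.list.demo <<'EOF'
deb http://deb.debian.org/debian buster main
deb http://deb.debian.org/debian buster-updates main
EOF

# Swap the old suite name for the new one, then inspect the result.
sed -i 's/buster/bullseye/g' /tmp/sources.list.demo
cat /tmp/sources.list.demo
```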

Devuan is actually simpler than Debian… if you compare the official inline upgrade documents.
Devuan also offers a cross-distro inline upgrade… from Debian 11 to Devuan 5. I might be brave enough to try that

If your installation is critical… clone it and upgrade the clone.

In a way, rolling releases are inline upgrades… in smaller steps. They just don’t put numbers on the steps.

The most reliable distro of all for upgrades is Void Linux.
In 3 years, doing fortnightly upgrades, it has never missed a beat. Not even a warning message. Nothing. It just works.

Hi to all of you, :wave:

thanks a lot for your replies. :heart:

@nevj :

Thanks for the compliment.

Actually this was my thought as well, but it couldn´t be done yesterday. For some reason the expand option was greyed out in gparted.
/dev/vda2 was the only partition which was allowed to grow. Weird… :thinking:.

I just gave it another try, and suddenly it worked. :astonished:

The only difference between yesterday´s and today´s scenario: yesterday I ran the qemu-nbd -c command from within the folder where the qcow2 file resides, whereas today I used the complete path:

sudo qemu-nbd -c /dev/nbd0 /media/rosika/f14a27c2-0b49-4607-94ea-2e56bbf76fe1/für_qemu2/für_debian/virtualdebian.img

But that shouldn´t make any difference, right?

How do you know that?

I fear I haven´t done it right again:

rosika2@debian ~> df -h
Dateisystem    Größe Benutzt Verf. Verw% Eingehängt auf
udev            475M       0  475M    0% /dev
tmpfs            99M    4,8M   95M    5% /run
/dev/vda1        11G    7,7G  1,9G   81% /
tmpfs           494M    108K  494M    1% /dev/shm
tmpfs           5,0M    4,0K  5,0M    1% /run/lock
tmpfs           494M       0  494M    0% /sys/fs/cgroup
/dev/loop1      387M    387M     0  100% /snap/anbox/213
/dev/loop2      102M    102M     0  100% /snap/lxd/23155
/dev/loop3       62M     62M     0  100% /snap/core20/1518
/dev/loop0      114M    114M     0  100% /snap/core/13308
/dev/vda6        38G    973M   36G    3% /home
mount-tag        35G     26G  7,3G   78% /mnt/mymount
tmpfs            99M    4,0K   99M    1% /run/user/114
tmpfs            99M       0   99M    0% /run/user/1000

It worked alright, but now it´s the home partition which has grown to 36 GB.
I´d need the root partition to get bigger for the upgrade… :frowning_face:

See also:

@daniel.m.tripp :

thanks for providing your interesting insights into the matter.

Many greetings to all.

Rosika :slightly_smiling_face:

P.S.:

I just checked:

/dev/nbd0p1 seems to be the root partition. I almost feared as much.
It holds Debian´s /dev, which seems to have been too small when I tried to upgrade to Bullseye (my 2nd attempt).

The problem is: I won´t be able to enlarge it, as there´s no free space directly after it.

The question remains: is my enlarged /dev/nbd0p6 of any use at all :question:

We wouldn´t be able to symlink /dev from the root partition to the enlarged partition, would we :question:
I´m sure it´s a silly question. :blush:

P.S.2:

My virt-manager doesn´t report the size. :slightly_frowning_face:

It says: “Speichergrösse Unbekannt”, i.e. “Storage size unknown”.

Hi Rosika,
You are right, nbd0p1 is the root partition, my mistake.
I thought you did not have a home partition? Did you make one?
10Gb, with 8Gb used… it makes sense now.

To expand nbd0p1, you have to move the other partitions.
Gparted can do that. It is risky… have a backup of the qcow2 file.
You shift them one at a time into free space, starting with the one adjacent to free space.
You are nearly there. Don't give up.

Regards
Neville

1 Like

Hi Neville, :wave:

thanks for confirming my assessment of the situation.

It´s quite embarrassing.
I was of the opinion I had just one partition. Seems I was wrong about it all along.
I cannot remember how I did the installation at the time. It´s too far in the past now. Sorry. :neutral_face:

O.K., I´d like to give it a try, especially in view of the fact that I already created a copy of the qcow2 file in order to minimize any risks.

But how would I go about it :question:
I´ve never done anything like this before.

Cheers from Rosika :slightly_smiling_face:

Gparted has a move facility.
You highlight the partition you want to move,
then, I can't remember exactly, but I think under Partition there is a Resize/Move button.
It brings up a popup where you can change the amount of free space before and after the partition.
Your free space is all after… you want to change it to before…so make the number of bytes before the partition as large as possible, just leaving enough space for the desired size of the partition, and no space after it.
It is quite tricky… when you change one value, it shifts the others behind your back, to show you what your setting implies.
When you get it right… press the go button.

I have done it. It just requires a bit of counter-intuitive fiddling.

2 Likes

Thanks Neville,

I´m following instructions from here: GParted -- GParted - Moving Space Between Partitions .

I´m at this stage now:

It seems I have to get “unallocated” one step higher, right?

Right.
Do the pending operations first.
Then attempt another move to get the free space one step higher.
You are doing fine

1 Like

O.K., I did the pending operations. Seems I succeeded.
But the next step would prove to be difficult:

I tried to do this:

  • Select the extended partition.
  • Choose the Partition | Resize/Move menu option and a Resize/Move window is displayed.
  • Click on the left-hand side of the partition and drag it to the right so that there is no space between the outer extended partition boundary and the inner logical partition boundary.

There has to be another way.

1 Like

I don't drag it.
I set the byte numbers in the popup.
Try that.

Doesn´t work, the Resize/Move button is greyed out in this case.

Seems I´m stuck now… :thinking:

Still: thanks a lot for your help, Neville.

Hang on,
Did you originally add space to nbd0 or to nbd0p2?
If the free space is inside nbd0p2 you will not be able to move it up above it.

Well, the first step (to add space) was done with

qemu-img resize virtualdebian.img +20G .

How do I know where exactly the free space was added… :thinking: :question:
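One way to answer this from numbers already in the thread: qemu-img appends new space at the end of the disk, and the earlier sfdisk output shows the extended partition (vda2) now ends on the disk's last sector, while the last logical partition (vda6) stops well short of it — so the free space sits at the end, inside vda2. (Something like `sudo parted /dev/nbd0 unit s print free` would show the free-space ranges directly.) A quick check with those sector numbers:

```shell
# End sectors taken from the earlier `sfdisk -l` output (512-byte sectors).
vda2_end=104857599    # extended partition ends on the disk's last sector
vda6_end=62912511     # last logical partition ends here
free=$(( vda2_end - vda6_end ))
echo "free space inside vda2: $free sectors (~$(( free * 512 / 1024 / 1024 / 1024 )) GiB)"
# → free space inside vda2: 41945088 sectors (~20 GiB)
```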

But it looks that way…
See screenshot in post #30

Oh, you can't tell from that.
It looks like it went into nbd0p2… i.e. it added it at the end.

Could you perhaps try to make free space with gparted?
Not sure it can do that? No, forget that.

Are you sure you can't move it up another step?
Did you try my method of setting the bytes?

Maybe you need to highlight nbd0p2
instead of nbd0p6.
Yes, that is it.
I think I would start again from scratch, highlight nbd0p2, and move the whole thing down… then the free space will be after nbd0p1.

Sorry, start again I am afraid… or maybe you can move it back down first, then move nbd0p2.

nbd0p2 is an extended DOS partition; it has partitions inside itself.

1 Like

Thanks a lot, Neville.

Yes, it didn´t work.

Well, I´ll see what I can do. Thanks so much.

Good night for now (what´s left of it) :heart:
Cheers from Rosika :slightly_smiling_face:

@nevj :

Hi Neville, :wave:

Update: Success :smiley:

… at least it seems that way.

Here´s what I did:

  • added another 10 GB to the qcow2 file:
    sudo qemu-img resize /media/rosika/f14a27c2-0b49-4607-94ea-2e56bbf76fe1/für_qemu2/für_debian/virtualdebian.img +10G

  • I saw the additional space was added to the end

  • now I could modify /dev/nbd0p2 (but only after that operation… :thinking: )
    “Click on the left-hand side of the partition and drag it to the right so that the free space is reduced by half.”:


(sorry, the image is truncated, but I guess you can see it).

So unallocated is directly below the root partition.
After applying the changes:

  • now it looks like this:

  • So the root partition /dev/nbd0p1 is 22.17 GB in size now. It worked. :+1: :blush:

Now I tried to boot the Debian VM. It went well.

Here´s df -h taken from inside Debian:

rosika2@debian ~> df -h
Dateisystem    Größe Benutzt Verf. Verw% Eingehängt auf
udev            475M       0  475M    0% /dev
tmpfs            99M    6,1M   93M    7% /run
/dev/vda1        22G    7,7G   13G   38% /
tmpfs           494M    108K  494M    1% /dev/shm
tmpfs           5,0M    4,0K  5,0M    1% /run/lock
tmpfs           494M       0  494M    0% /sys/fs/cgroup
/dev/loop3      102M    102M     0  100% /snap/lxd/23155
/dev/loop0       62M     62M     0  100% /snap/core20/1518
/dev/loop1      114M    114M     0  100% /snap/core/13308
/dev/loop2      387M    387M     0  100% /snap/anbox/213
/dev/vda6        20G    973M   18G    6% /home
mount-tag        35G     26G  7,3G   78% /mnt/mymount
tmpfs            99M     12K   99M    1% /run/user/1000

also:

rosika2@debian ~> sudo sfdisk -l
Disk /dev/vda: 60 GiB, 64424509440 bytes, 125829120 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4f7a63d0

Device     Boot    Start       End  Sectors  Size Id Type
/dev/vda1  *        2048  46493695 46491648 22,2G 83 Linux
/dev/vda2       46493696 104859647 58365952 27,9G  5 Extended
/dev/vda5       61257728  63256575  1998848  976M 82 Linux swap / Solaris
/dev/vda6       63258624 104857599 41598976 19,9G 83 Linux
[...]
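The new numbers check out: converting vda1's sector count back to binary units reproduces the 22.17 GiB gparted reported (shell arithmetic sketch only):

```shell
# vda1 spans 46491648 sectors of 512 bytes (from the sfdisk output above).
sectors=46491648
echo "vda1: $(( sectors * 512 / 1024 / 1024 )) MiB"   # 22701 MiB ≈ 22.17 GiB
```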

Well, what about that… :smile: ?

As it is right now, I still have 10 GB of unallocated space left (according to gparted).
I´m sure it can be of some use. I´ll have to think about it.

Many thanks for your help, Neville :heart:

… and many greetings from Rosika :slightly_smiling_face: