Using the CLI to backup or copy a Linux system

This is a joint post by @kovacslt and @nevj


There are several programs commonly used to back up and restore the root filesystem of a Linux installation. A number of them (borg, systemback, timeshift) use rsync ‘under the hood’. However, it is also possible to use rsync directly, as a CLI command, or to use tar. Some users prefer the CLI because it gives them full control over the backup and restore processes.

There are other possibilities, such as cp and dd. These are not advisable for a number of reasons; dd, for example, copies whole disks or partitions rather than filesystems. We will only investigate tar and rsync here.

Using rsync to backup an OS

There are a number of precautions to consider when using rsync to copy Linux.
We will look at the following:

  • choosing the most appropriate command line options
  • using rsync with the root filesystem unmounted
  • using rsync from within the running Linux to be copied
  • what to do if you copy (or restore) Linux to a different partition

The general form of the rsync command is

rsync <options> <source directory> <destination directory>

It is important, when copying an operating system, to choose appropriate options, so that filesystem features such as dates, permissions, and links are preserved.
I (nevj) have always used rsync -aAXvH, but on consulting the experts here

https://superuser.com/questions/307541/copy-entire-file-osystem-hierarchy-from-one-drive-to-another

the advice is that one should also use the ‘x’ and ‘S’ options, so what we should recommend is

rsync -avxHAXS  <source directory> <destination directory>

Let us have a look at the meaning of each of these options:

  • -a .. archive mode: all files with their attributes (permissions, dates, …). It operates recursively, ie copies all subdirectories within the source
  • -v .. verbose
  • -x .. stay on one filesystem (ie do not cross mount points into other partitions)
  • -H .. preserve hard links
  • -A .. preserve ACLs
  • -X .. preserve extended attributes
  • -S .. handle sparse files efficiently

These options are all designed to ensure the copied OS filesystem is exactly the same as the original.
The ‘x’ option ensures that if, for example, your home directory is on a separate partition, it will not be copied when you copy ‘/’.
There may be occasions when one might wish to omit the ‘x’ option, but the others are essential for copying an OS.
You might encounter sparse files if your Linux uses docker containers.

Another option that may be required is --exclude. For example

rsync --exclude={/dev/*,/proc/*,/sys/*,/run/*}

will not copy the virtual filesystems dev, proc, sys, and run. They will appear in the copy as empty directories.
This can also be used to avoid copying mount points (/mnt/* and /media/*) or the home directory (/home/*).
The curly brackets above are a bash shell construct. If you are not using bash, it is necessary to list each excluded filesystem separately, ie

rsync --exclude=/dev/* --exclude=/proc/* --exclude=/sys/* --exclude=/run/*

for the above case.
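You can check what bash actually hands to rsync by substituting echo for rsync; this is a safe way to preview any brace-expanded command line:

```shell
#!/bin/bash
# Brace expansion happens in the shell, before rsync ever runs.
# Substituting echo shows the words rsync would receive:
echo --exclude={/dev/*,/proc/*,/sys/*,/run/*}
# prints: --exclude=/dev/* --exclude=/proc/* --exclude=/sys/* --exclude=/run/*
```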

Other options that may sometimes be useful are

  • --delete .. deletes files in the destination that are not present in the source
  • --link-dest .. hard-links unchanged files against a previous backup; powerful for incremental snapshots, but too complicated to cover here
  • --filter=protect .. protects files in the destination from being deleted even if --delete is used

We shall illustrate how to apply these options with a series of examples.

Example 1. Rsync an unmounted root filesystem.
If you look at the filesystem of a Linux other than the one currently running, you will see that the directories /dev, /proc, /sys, and /run contain no files.
These filesystems are populated at boot time and held in ramdisk… they are not saved to disk at shutdown.
Therefore, one can copy this unmounted root filesystem to another partition with the following steps:

  • if the destination is a USB flash drive, format it to ext3 or ext4… do not use a fat32 formatted flash drive
  • if the destination is another hard disk partition it must, of course, be formatted
  • mount the source filesystem, and the destination filesystem
  • rsync -avxHAXS <source directory> <destination directory>
  • there may be cases where you would not use -x, for example if /boot were a separate partition.
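Putting those steps together, a minimal sketch might look like the following. The mount points and the backup_unmounted name are inventions for illustration, and the device arguments are placeholders you must replace; run it as root only after checking them with lsblk or blkid.

```shell
#!/bin/bash
# Sketch of Example 1. Device names are placeholders -- verify them with
# lsblk or blkid before running anything as root.
backup_unmounted() {
    [ $# -eq 2 ] || { echo "usage: backup_unmounted <src_dev> <dst_dev>" >&2; return 2; }
    src_dev=$1
    dst_dev=$2
    mkdir -p /mnt/src /mnt/dst
    mount "$src_dev" /mnt/src     # the unmounted root filesystem to copy
    mount "$dst_dev" /mnt/dst     # the freshly formatted ext4 destination
    rsync -avxHAXS /mnt/src/ /mnt/dst/
    umount /mnt/src /mnt/dst
}

# Example invocation (placeholders, do not run as-is):
# backup_unmounted /dev/sdXn /dev/sdYn
```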

Example 2. Rsync within a running Linux system
To copy the root filesystem of the currently running Linux to a disk, you need to avoid copying /dev, /proc, /sys, and /run, so the --exclude option is needed.
You should also take precautions to prevent the system writing to the root filesystem while it is being copied:

  • if it is a multi-user system, go into single user mode
  • avoid doing anything while the rsync copying is active. In particular do not do an update/upgrade, suppress any cron jobs, and avoid doing anything yourself at the GUI or terminal.

The destination directory needs to be formatted and mounted. If it is a USB flash drive, do not leave it as a FAT32 filesystem; format it to ext4.

The rsync command should be

rsync --exclude={/dev/*,/proc/*,/sys/*,/run/*} -avxHAXS  <source directory> <destination directory>

Again, there may be cases where you would not use -x.
It may also be desirable to exclude /mnt/*, /media/* and /tmp/*.

Example 3. Recovering the OS from an rsync’ed copy

Recovery is easy. In most cases the ‘damaged’ Linux which one wishes to recover is not able to run, so one uses another Linux to do the recovery with rsync.

  • start some other Linux (eg a live USB drive)
  • mount both the source directory (ie the copy) and the destination directory (ie the partition to receive the recovery)
  • issue an rsync command
rsync  -avxHAXS --delete  <source directory> <destination directory>

The --delete option allows you to restore Linux over the top of a presumably damaged Linux filesystem. If you delete the root filesystem or reformat the partition before restoring, --delete is not necessary.
There is no need for the --exclude options in a recovery, provided the pseudo-filesystems (proc, dev, sys, and run) are not present in the backup copy.
Reboot and start the recovered Linux.

Trying to recover to a ‘live’ running Linux has not been tested. The following may work. At your own risk.

rsync -avxHAXS --delete --filter='protect /dev/*' --filter='protect /proc/*' --filter='protect /sys/*' --filter='protect /run/*' <source directory> <destination directory>

Here the source directory would be the backup copy, and the destination directory would be /
The --filter='protect …' options should protect the /dev, /proc, /sys, and /run files in the live system from being written on or deleted.

Using tar to backup an OS or a filesystem

The utility “tar” exists exactly for backing up a filesystem. The name “tar” means “tape archive” and its original use was for copying a filesystem to magnetic tape. Today it is used mainly for archiving filesystems to disk.

General usage is

tar [options] [file] [file]...

I like to use tar to back up complex directory structures with many subdirectories, possibly having symlinks in them, etc. Such a complex thing may be a WINE prefix in a user’s home directory, or a home directory as a whole, but it could also be the complete installed OS.
The first example shows how I backed up a WINE prefix, so that it can be moved between computers more easily.

(Fun fact: this WINE prefix had an activated instance of MS Office 2007, which I expected to have to reactivate on the other computer after moving, but instead the activation survived the move.)

tar -czvpf ~/winebackup.tar.gz --directory ~ .wine 

Let’s break down what this line does:
-executes the tar command
-passes the options: -czvpf

The options are:

  • c : create archive
  • z : use gzip compression
  • v : verbose output (useful to track what it does)
  • p : preserve file permissions
  • f : specify the file for archive

~/winebackup.tar.gz will be the archive file. As the name suggests, it will be “winebackup.tar.gz” in the home directory of the current user.
--directory ~ tells tar where to work. It is the equivalent of changing to that dir beforehand and issuing the tar command there, such as cd ~ ; tar -c....
.wine is the file (directory) to back up. All of its contents will be added to the archive.

Now to “restore” the archive:

tar -xzvpf ~/winebackup.tar.gz  --directory ~

Note the options given to tar:

  • x : extract the archive

All the other options mean exactly the same as above.
The filename of the archive from which the contents are extracted has to be specified. Without specifying what to extract, tar will extract everything. The
--directory ~
tells tar again where to work; we may restore that dir into another directory.
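The create/extract pair can be rehearsed safely in a scratch directory before trusting it with a real WINE prefix (every path below is a throw-away):

```shell
#!/bin/bash
# Round-trip rehearsal of tar -czpf / -xzpf on scratch directories.
set -e
work=$(mktemp -d)
mkdir -p "$work/home/.wine/drive_c"
echo data > "$work/home/.wine/drive_c/app.ini"

# archive, rooted at $work/home just as the WINE example is rooted at ~:
tar -czpf "$work/winebackup.tar.gz" --directory "$work/home" .wine

# restore into a different "home" directory:
mkdir "$work/home2"
tar -xzpf "$work/winebackup.tar.gz" --directory "$work/home2"

# identical trees prove the round trip worked:
diff -r "$work/home/.wine" "$work/home2/.wine" && echo "round trip OK"
rm -rf "$work"
```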

Backing up the whole OS works similarly, but there are some caveats.
First, avoid archiving the resulting archive file itself.
Say one would like to put the system archive into the root, such as /os_backup.tar.gz; it is best to tell tar to exclude it:

--exclude=/os_backup.tar.gz

Such a command line to backup the whole OS would look like:

tar --exclude="/os_backup.tar.gz" --exclude="/proc" --exclude="/sys" --exclude="/media" --exclude="/mnt" --exclude="/run" --exclude="/tmp" --exclude="/var/cache" --exclude="/var/lib/udisks2" --exclude="/var/run" --exclude="/var/tmp" -czpvf /os_backup.tar.gz /

This creates an os_backup.tar.gz file in the root, from the contents of / (the root of the filesystem). It is necessary to think about what to exclude, as my example may be incomplete. For example, if a heavily used database server is running, it is wise to avoid backing it up this way, so it would be necessary to exclude /var/lib/mysql.
I won’t say this is the best practice, but I separate backing up the system itself, and backing up the data handled by the system.
So backing up a database would be a separate process.

As for restoring an OS, it is basically the process of extracting the tar archive. It will overwrite all existing files (libraries, executables, config files, etc.) on the system, thus making it functional again after some unsuccessful tinkering. However, it won’t remove any residue of a previous failed installation, so unused/unneeded libraries may still be there unless the filesystem is cleared first. Hence the rsync method is much better for whole-OS backups.

Booting a copied Linux

If you move a Linux root filesystem to another disk, there are modifications required before it can be booted in that new location.

If your Linux root filesystem copy is a tar archive, you obviously have to extract the filesystem from the archive and put it on a partition before you can consider booting. Other than that, boot considerations for a tar copy are the same as for an rsync copy.

  1. There must be a bootloader somewhere that can ‘see’ this Linux filesystem. In the case of an ext formatted USB flash drive, it is possible to add a bootloader to the flash drive. There are instructions for adding grub to a USB drive here
https://github.com/nevillejackson/Unix/blob/main/grub/makeusb.pdf

If the moved Linux filesystem is on an internal disk partition, the grub which boots your normal linux can be used. Run the following command in your normal Linux ( ie the one that controls grub)

update-grub

That should run os-prober and find the new Linux copy. If you then reboot, the grub menu should offer the new Linux copy, as well as other Linuxes you may have installed.

  2. The bootloader will not boot this new Linux unless you modify its /etc/fstab file. Use the command
blkid

to get a list of the UUIDs of all partitions on your system. You may have to install blkid. Then mount the new Linux’s filesystem and cd into it

mount -t ext4 /dev/.... /mnt/new
cd /mnt/new

then edit the file /mnt/new/etc/fstab in an editor, find the line that defines the root filesystem (called /), delete its UUID, and copy/paste the new partition’s UUID in its place. Do the same for /boot if it is a separate partition. The other partitions (swap and /home) should be correct.
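If you prefer to script the edit, a simple sed substitution does the same job. The UUIDs below are invented for illustration (take the real ones from blkid), and this rehearsal works on a scratch copy rather than the real fstab:

```shell
#!/bin/bash
# Swap the root UUID in an fstab with sed. UUIDs here are made up;
# substitute the real values reported by blkid.
set -e
OLD_UUID="11111111-2222-3333-4444-555555555555"
NEW_UUID="66666666-7777-8888-9999-000000000000"

# rehearse on a scratch copy instead of /mnt/new/etc/fstab:
fstab=$(mktemp)
echo "UUID=$OLD_UUID /  ext4  errors=remount-ro  0 1" > "$fstab"

sed -i "s/UUID=$OLD_UUID/UUID=$NEW_UUID/" "$fstab"
cat "$fstab"      # the root line now carries the new UUID
rm -f "$fstab"
```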

  3. The bootloader may also have problems if the file /boot/grub/grub.cfg contains references to the original root filesystem. There are 3 ways to fix this
  • mount the new filesystem as above, cd into it, and delete the file /mnt/new/boot/grub/grub.cfg. It will boot without a grub.cfg
  • mount the new filesystem as above, cd into it, and edit the file /mnt/new/boot/grub/grub.cfg. Change any UUID’s as in fstab, and check that the line root=... is correct.
  • boot to the grub menu, then choose the new Linux copy, type e to go into edit mode, and change the UUID and root=... as above. Then press F10 to boot, and when it boots run update-grub and it will automatically fix the file grub.cfg in the booted new Linux.

After making modifications 1, 2 and 3, you should boot the Linux that controls grub and run update-grub. Then the new Linux copy should boot from the grub menu.

If you were then to use this ‘moved’ Linux for a restore, you would have to go through all the above steps again on the restored copy. So, if you want to use a Linux copy for a restore, it is best not to make it bootable.

Discussion

These procedures can also be used to permanently move an installed Linux to another partition.

Using tar or rsync without compression is the fastest way to back up an entire root filesystem. If you do this regularly, and keep copying to the same partition, rsync will copy files incrementally, which can be really fast, but you would need the --delete option to ensure that files in the destination that are not in the source are deleted.

The general principles for snapshotting an OS are

  • find a reliable procedure that suits you and stick to it
  • if copying a ‘live’ system, make sure it is inactive, and exclude pseudo-filesystems
  • test your restore procedure

Acknowledgement

We would like to thank @easyt50 and @ihasama for prompting us to tackle this topic.


This is a huge topic. We have not been able to cover everything.
Please feel free to correct any mistakes and to contribute further on anything related to using CLI tools to backup and restore.

The .md file used to construct this topic is available here

https://github.com/nevillejackson/Unix/tree/main/cliback

Thank you Neville and Laszlo!! I will try both rsync (which I use at the moment on a running system) and tar to a USB stick.

One thing I noticed: update-grub is not used in all Linux distros. I would change it to “grub-mkconfig -o /boot/grub/grub.cfg”

Also, os-prober is disabled by default on many distros, and you need to edit the /etc/default/grub file to set the line “GRUB_DISABLE_OS_PROBER=false”

Once again, thank you!


Thank you … there should be a note

Yes, mostly Ubuntu and derivatives.

I think we should save up feedback, and maybe do one big rewrite. I don’t mind having a super-document with maybe 20 authors.

It proved to be a more complicated exercise than I envisaged.
Doing what timeshift does with CLI commands involves a lot of thought about the Linux filesystem, and some juggling of rsync options.


Thank you @nevj for coordinating and posting this; I know you have put much effort into it.
I hope it helps someone at some time…


I am grateful for your contribution, Laszlo, and also for opening my eyes to the feasibility of backing up a running Linux system. I did have a closed mind on that.


I would like to clone my Gentoo to a SSD!!!


Still here with your 2nd profile? I assumed we discussed this enough. So why?

That is Daniel’s preferred profile. We are waiting for Abhishek to remove the other one.


A big Thank you to both @nevj and @kovacslt for all the hard work that went into this detailed post on using the CLI to perform a backup of a Linux system.

It looks to be well written, with nice examples provided. It will take me a little while to read through the whole document, and even longer to try these procedures.


Have no clue!!!


No problem, you can use this

rsync --exclude={/dev/*,/proc/*,/sys/*,/run/*} -avHAXS  / <destination directory>

from within the running gentoo
or

rsync -avHAXS  <source directory> <destination directory>

from a live usb.
Then fix fstab and grub.cfg

I left out the -x option because you might have /boot on a separate partition.

Moving gentoo to another machine might not go well if it is a different architecture.
Moving to another disk on same machine should be fine.


Great idea @nevj

My cronjob on my Pi systems - that backs them up - uses “--exclude-from=” with a list as a text file - that makes it a lot easier to read the script …
Runs at 4 am on Sunday :

root@frambo:~# crontab -l |grep -v \#
0 4 * * 0  /usr/local/bin/bk-rpi.bash > /dev/null 2>&1

I only use “-av” switch however …

root@frambo:~# grep -i EXL /usr/local/bin/bk-rpi.bash 
EXL="/usr/local/bin/elx2.txt"
rsync -av --exclude-from=$EXL / $ARSYDIR/

and “elx2.txt”

root@frambo:~# grep -v \# /usr/local/bin/elx2.txt
/mnt/BUNGER00/backups
/mnt/BUNGER00/archive
/home/x/ResilioSync
/home/x/Resilio\ Sync
/home/x/.cache
/home/x/.config
/home/x/.local
/home/x/.steam
/home/x/.oh-my-zsh
/home/x/.kodi
/home/x/tmp
/proc
/sys
/dev
/mnt
/media
/var/spool
/var/tmp
/var/lib/snap
/var/lib/snapd
/boot
/run
/snap
/swapfile
/tm

One of the great things about --exclude-from list format - it even supports comments - i.e. “#” will ignore that line as you’d expect with most things… I have some redundancies in there… e.g. “/mnt” and “/mnt/BUNGER00/archive” and backups… just never got around to sorting that out… /mnt/ usually has my NAS mounted - I don’t want to backup all 12 TB of that! But I don’t want to use “x” argument - because I don’t want to have to think about crossing, or not crossing, mountpoints…

This in theory - should also work on my Ubuntu machines… But I don’t bother… The Pi systems are essentially headless servers, so I don’t need various things like ~/.config backed up - I’d probably change that if I was going to run this on a desktop Linux system… But if I was to backup my main Ubuntu desktop - which now has 21 KVM vms - I’d probably quickly run out of disk space (the backup target is a 6 TB external USB3 disk mounted on my main Pi4 “server”).

I did have some housekeeping at the end of that script - but it wasn’t working to my satisfaction… I’d really like an algorithm to use in a shell script to ONLY leave the first Sunday of each month archive in place (note: I have a separate script that tars and gzips the output to a tgz which runs at 5 am on a Sunday). I have to manually locate the folder and manually delete stuff that’s not “EOM” (I call “EOM” the first sunday of the next month).
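One possible sketch of that housekeeping rule: a first Sunday is just a Sunday whose day-of-month is 7 or less. The directory path is a placeholder, the loop only echoes what it would delete, and GNU date is assumed (fine on a Pi); an age check, e.g. only touching files older than two months, could be added the same way.

```shell
#!/bin/bash
# Keep only first-Sunday-of-month archives (dry run -- swap echo for rm
# once the output looks right). Relies on GNU date.
is_first_sunday() {
    dow=$(date -d "$1" +%u)    # day of week, 7 = Sunday
    dom=$(date -d "$1" +%-d)   # day of month without leading zero
    [ "$dow" -eq 7 ] && [ "$dom" -le 7 ]
}

for f in /path/to/backups/bkup-telesto-*.tgz; do
    [ -e "$f" ] || continue
    d=${f##*-}                 # e.g. 20241009.tgz
    d=${d%.tgz}                # e.g. 20241009
    is_first_sunday "$d" || echo "would delete $f"
done
```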

Here’s the backups of my Pi3 that runs TVHeadEnd :

bkup-telesto-20241009.tgz
bkup-telesto-20241102.tgz
bkup-telesto-20241201.tgz
bkup-telesto-20250105.tgz
bkup-telesto-20250202.tgz
bkup-telesto-20250302.tgz
bkup-telesto-20250406.tgz
bkup-telesto-20250504.tgz
bkup-telesto-20250601.tgz
bkup-telesto-20250706.tgz
bkup-telesto-20250803.tgz
bkup-telesto-20250907.tgz
bkup-telesto-20251005.tgz
bkup-telesto-20251102.tgz
bkup-telesto-20251116.tgz
bkup-telesto-20251123.tgz
bkup-telesto-20251130.tgz
bkup-telesto-20251207.tgz
bkup-telesto-20251214.tgz
bkup-telesto-20251221.tgz
bkup-telesto-20251228.tgz
bkup-telesto-20260104.tgz

The last time I did housekeeping was late October… I’d like a housekeeping script to e.g. go along 2 months later and delete anything that’s not the first Sunday of the month… But for the moment - it’s easy enough to manually housekeep…


Thank you for the compliment.
The idea of being able to do it on a running Linux ( like Timeshift) belongs to @kovacslt
He converted me. …

I looked up the man page:
" --exclude=PATTERN        exclude files matching PATTERN
  --exclude-from=FILE      read exclude patterns from FILE"
Yes, that tidies it up.
Another candidate for the rewrite.
Your list shows how complicated exclusions can get.

I have a backup day about every 2 months, in theory. Most of it is clonezilla, which is dreadfully slow. I am tending to lean towards backups without compression… it is the compression that slows clonezilla.

When we do the final cleanup of this document I want it to be something from all active forum members… so keep thinking. You have the experience. Have you done rsync on a live system? rsync has more than 50 options, tar has more than 100. We have only scratched the surface.


See the discussion in

Reply No265

Regarding ddrescue and dd.
They can copy a disk, a partition or a file… but not a filesystem or part thereof.


That really looks like a great option!
I think in a script I’d use it together with a heredoc,

rsync -av --exclude-from=- /source/ /dest/ <<_EOF
/run
/proc
/sys
_EOF

Or something similar. I like that! :star_struck:

Thanks @nevj, actually it wasn’t my idea; just look at how systemback.sh works.
The main concern about making a snapshot of a running system is: what if a file changes during the backup?
That’s why it’s vital to think about (and learn from mistakes, of course) what to back up and what not.
The system itself doesn’t change while running, except log files maybe. What changes is the data on which the system works.
So backing up all the system components is safe; of course, just pay attention not to do this backup during a system upgrade :wink: The systemback.sh does such a sanity check.

[[ $1 =~ n|r ]] && {
      fuser /var/lib/{dpkg,apt/lists}/lock >/dev/null 2>&1 && error 7 $LINENO # Avoid interfere with Debian package managers
    }

I see doing a system backup as a protection against my own mistakes.
For example if I installed amdgpu pro drivers the wrong way: there’s no easy undo for that, unless I have a snapshot to restore :wink:
Of course I got a surprise too, when I once restored a snapshot made by systemback.sh: all of a sudden the printer started to print. Well, there was an active job in the spool when I created the snapshot, and that was restored too :rofl:
Probably excluding /var/spool/cups would have prevented that, but I wasn’t aware of it at the time.
Only a couple of pages wasted anyway…
So what I want to say, after all: separate the system backup from the data backup as much as possible.
When I back up the contents of my Seafile server, I do it in 2 passes as follows:
First rsync the data to the backup server, but the Seafile instance is still active.
That may take quite a lot of time, maybe hours long, depending on how much data were added during the day.
Because Seafile is still running, it is theoretically possible, that something changed in the data structure during the very long backup procedure.
So in the next pass I stop Seafile and repeat the rsync. This time it takes only a couple of minutes, as there are very few (if any at all) data to sync. Having the server down for a couple of minutes seems better to me than having it down for hours, even if it happens in an inactive timeframe, between 3 AM and 5 AM when everyone should be sleeping.
Maybe a similar approach for a really complete system backup would be a better way,
so 1st pass just rsync OS components, then stop all data manipulating/creating processes, such as mysql, postfix or whatever, and do a second (much faster) pass.
This would be easy to automate via scripts.

This is a snippet from my backup script, which runs every day:

#!/bin/sh

if [ ! -f /srv/samba/restore ]; then
##no restore, do the backup
/opt/repoupdate.sh

echo $(date +%Y%m%d-%T)  " Backup started " 

##try to mount the shares to backup to
/opt/mountbackuptarget.sh

if [ $? -eq 0 ]; then

##mount successful, do the backup
    echo $(date +%Y%m%d-%T)  " /mnt/nfs/srv mount OK " > /var/log/backup.log 

### some backup steps deleted from here, as only Seafile is relevant  now

    echo $(date +%Y%m%d-%T)  " Saving seafile " >> /var/log/backup.log

    rsync -av --delete-before  /srv/seafile-data/ /mnt/nfs/srv/seafile-data/ >> /var/log/backup.log
    rsync -av --delete-before /srv/seahub-data/ /mnt/nfs/srv/seahub-data/ >> /var/log/backup.log
    echo $(date +%Y%m%d-%T)  " Seafile saved 1st pass " >> /var/log/backup.log 

### some other backup steps omitted from here for now as well

    /usr/sbin/service seahub stop
    /usr/sbin/service seafile stop

    /usr/sbin/service mysql stop

    echo $(date +%Y%m%d-%T)  " Mysql stopped, saving mysql data_dir " >> /var/log/backup.log 

    rsync -av --delete /srv/mysql/ /mnt/nfs/srv/mysql/ >> /var/log/backup.log
    echo $(date +%Y%m%d-%T)  " Mysql OK, start " >> /var/log/backup.log 
    /usr/sbin/service mysql start
##mysql files saved, start as soon as possible

    echo $(date +%Y%m%d-%T)  " Mysql started, start dovecot " >> /var/log/backup.log 
    mysqldump -u root -p  <passw removed from here> -A -R -E --triggers --single-transaction  >/mnt/nfs/srv/mysqlback/mysqldump.sql
##also create a dump from the mysql, in case not all databases and/or tables should be restored
##the dump.sql can be edited later as neccessary

    echo $(date +%Y%m%d-%T)  " Starting seafile data backup 2nd pass... " >> /var/log/backup.log 

    rsync -av --delete --delete-before /srv/seafile-data/ /mnt/nfs/srv/seafile-data/ >> /var/log/backup.log
    rsync -av --delete-before /srv/seahub-data/ /mnt/nfs/srv/seahub-data/ >> /var/log/backup.log

#second pass of Seafil data backup, now seafil isn't running
#seahub data 

    echo $(date +%Y%m%d-%T)  " Seafile backup finished " >> /var/log/backup.log 
 
   echo $(date +%Y%m%d-%T)  " Start seafile " >> /var/log/backup.log 

    /usr/sbin/service seafile start
    /usr/sbin/service seahub start

    echo $(date +%Y%m%d-%T)  " Seafile started, umount /mnt/nfs/srv " >> /var/log/backup.log 

    /opt/unmountbackuptarget.sh
#unmount backup target
    echo $(date +%Y%m%d-%T)  " Umounted " >> /srv/samba/backup.log 

else echo $(date +%Y%m%d-%T)  " Remote mount failed " >> /var/log/backup.log 
#log if the mount was unsuccessful
fi

else
echo "restore flag found, not saving any backup"
#it's about to restore, if this flag is present, it has a reason, so don't do backup possibly overwriting saved healthy data with a corrupted damaged heap
fi

However, the running of this script is not triggered by a timer/cron job on the server itself, but by the starting of the backup server. It just switches on at 3 AM (set in BIOS), logs into this server via ssh, triggers the backup script, and then, when done, shuts itself down.
So it may be powered on for 3 hours for a long backup, 30 minutes for a shorter one, or forever if the restore flag is found, but that’s a different case.


Yes, that is better than having a separate file to keep track of.

Yes, there is no harm in making a possibly corrupt backup with the system running, then fixing it with a shutdown and an incremental rsync. Clever idea.

We are definitely going to need a rewrite, to make a comprehensive reference document.
I wonder if systemback should be added too?

There are backups for various purposes

  • protection from user errors
  • allow rollback from failed updates
  • allow recovery from hardware failure
  • separate strategies for user data and system

Each may be best done in a different way.
I tend to favour simple strategies … less scope for user error.


@nevj and @kovacslt :

Hi Neville and László, :waving_hand:

I would like to express my agreement with what Howard (@easyt50) has said:
“A Big Thank you” to both of you. :heart:

You´ve done a wonderful job on a topic that should be of great interest to many users here in our forum.
I remember when switching over to a Linux distro on a permanent basis, one of the first thoughts that entered my mind was how to do a decent backup of my system.
That has always been a priority to me.

I have to admit that was new to me. :blush:
I was always of the opinion that backing up an ext-formatted partition should only be done when it isn´t mounted. I think that´s what the clonezilla people had in mind.

I´m thrilled to see that backup from a running Linux system can be done after all.
It was a real eye-opener to me.

I may read through your tutorial a few times more to fully grasp it.

Thanks again for your hard work and effort, dear Neville and László. :heart:

Many greetings from Rosika :slightly_smiling_face:


Sure, that’s clearly true. It’s highly dangerous to dd a mounted partition, as it will almost certainly result in a damaged backup image.
However, note that rsync doesn’t copy the partition, but the files in a filesystem.


Hi László, :waving_hand:

Thanks for pointing out the difference.
So to me rsync looks better by the minute. I´m impressed.

Cheers from Rosika :slightly_smiling_face:
