How to create incremental backups using rsync on Linux - LinuxConfig.org

Yes, this should work as you intended: it will take a full backup, the same way as if you pointed --link-dest at an empty directory. The subsequent incremental backups will take the last full backup and recent increments into consideration, as long as you provide the path of the most recent backup as --link-dest.
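For what it's worth, on the very first run --link-dest can even point at a directory that does not exist yet; rsync just prints a warning and, having nothing to hard-link against, copies every file. A minimal sketch with made-up paths:

# First run: /backup/2024-01-01 is created as a full backup because the
# --link-dest directory does not exist yet (rsync only warns about it).
rsync -a --link-dest=/backup/latest /data/ /backup/2024-01-01/

# Later runs: point --link-dest at the newest snapshot so unchanged
# files become hard links instead of fresh copies.
rsync -a --link-dest=/backup/2024-01-01 /data/ /backup/2024-01-02/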

I would also keep the previous full backup around until I know for sure that the next full backup has completed successfully. My procedure would be something like: make a full backup → do incremental backups → move the full backup directory aside before taking a new full → take the new full backup → delete the old full after success.
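A minimal sketch of that last part, with made-up local paths:

# Set the current full backup aside until the new one succeeds.
mv /backup/full /backup/full.old

# Take the new full backup (no --link-dest, so everything is copied),
# and delete the old full only if rsync exits successfully.
rsync -a /data/ /backup/full/ \
  && rm -rf /backup/full.old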

Hi, I really like this rsync backup solution, and I had a similar question to paqman's about deleting old backups. My understanding of the hard links that these rsync incremental backups are based on is that the actual inode a file points to will still exist (not be deleted) as long as another file is linked to it.

So I’m assuming that if I were to create that first full backup, then numerous incrementals, and then delete that first backup directory, the files/inodes that are still hard-linked from the subsequent backups would still exist. If this is correct, I could just continue the rolling incrementals and purge the older ones as they reach an age threshold, without the need for another full backup.
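This matches what a quick test with a throwaway file shows (file names are made up):

echo "hello" > a        # create a file; its link count is 1
ln a b                  # add a hard link; both names point to the same inode
stat -c '%h %i' a b     # both lines show link count 2 and the same inode
rm a                    # delete one name
cat b                   # prints "hello": the data lives on via the other link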

Am I correct in my understanding of this?

Hi GlassMan,

Welcome to our forums.

They will exist, but what about files you have deleted, which are therefore no longer in the incremental backups? In that case you lose them if you don't keep the first full backup. If your use case is something that only ever grows, like a picture collection you never delete from, the way you describe it should work safely; but if data is going in and out, you might want to reconsider your backup plan.
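To make the risk concrete, with hypothetical paths: a file deleted from the source after the first backup exists only in that first snapshot, with a single link, so purging the snapshot destroys its last copy.

# report.txt was deleted from the source after the first backup,
# so no later snapshot hard-links to it.
stat -c '%h' /backup/2024-01-01/report.txt   # link count: 1

# Removing the oldest snapshot removes the last link; the data is gone.
rm -rf /backup/2024-01-01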

Hello bro, could you help me out please?

I tried to do it but it doesn't work. I don't know much about code, so I'm struggling, haha.
I have an external server for the backups, so I run it here to get the data from the production server.
Thank you!!

I tried with this:

#!/bin/bash

set -o errexit
set -o nounset
set -o pipefail

readonly BACKUP_SVR="master_xxx@x.x.x.x"
readonly SOURCE_DIR="/home/master/commonFiles"
readonly BACKUP_DIR="/home/oggo/master_xxx/commonFiles"
readonly DATETIME="$(date '+%Y-%m-%d_%H:%M:%S')"
readonly BACKUP_PATH="${BACKUP_DIR}/${DATETIME}"
readonly LATEST_LINK="${BACKUP_DIR}/latest"

# Make sure the backup directory exists on the backup server.
ssh "${BACKUP_SVR}" "mkdir -p ${BACKUP_DIR}"

# --link-dest is resolved on the receiving side, so it must be an
# absolute path on the backup server.
rsync -ahvW --no-compress --delete \
  "${SOURCE_DIR}/" \
  --link-dest "${LATEST_LINK}" \
  --exclude=".cache" \
  "${BACKUP_SVR}:${BACKUP_PATH}"

# Repoint the "latest" symlink at the snapshot just taken.
ssh "${BACKUP_SVR}" "rm -rf ${LATEST_LINK}"
ssh "${BACKUP_SVR}" "ln -s ${BACKUP_PATH} ${LATEST_LINK}"

Thank you

Hi all,
Thanks for this article. I have been using the script for a while now to make local backups on an external hard drive. I thought it worked miraculously, but today I checked the inodes of some files common to backup number x and backup number y, and they are not the same. So I guess the comparison operation of rsync --link-dest "${LATEST_LINK}" somehow fails. Any idea what could cause this? Can anybody confirm the script works (inodes compared)? I see a similar message further up in the thread, but there it seems to have been a problem with a (missing) ssh command.
I am using absolute paths for SOURCE_DIR and BACKUP_DIR.

It is not making a full backup every time…

It does the first time you run it: it stores the files, and they have a hard link count of 1.

The next time you run it, yes, it creates a new directory and new directory entries, but because it is using --link-dest, it just creates hard links to the existing files/inodes.

To see for yourself, run this in the first backup directory, then in the 2nd:

stat .bashrc (for example) and you will see the file has (hard) Links: 2 and the inode number is the same in both cases. Although the file appears in both directories, it is actually the same file.
This is also why the 2nd run is usually much faster.
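To spell that check out, assuming two hypothetical snapshot directories:

# The same inode number and a link count of 2 mean both directory
# entries are the same file on disk.
stat -c '%i %h %n' /backup/2024-01-01/.bashrc
stat -c '%i %h %n' /backup/2024-01-02/.bashrc

# find can also list every path sharing a given file's inode:
find /backup -samefile /backup/2024-01-01/.bashrc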

Timeshift, with its RSYNC option, uses this same --link-dest method.

Hope that helps.