Simple Backup Solution
Question (score 6), asked Apr 26 at 8:51 by wottpal, last edited May 2 at 10:30
I'm looking for a very basic backup script/package for a directory on my Ubuntu server. Currently I'm using a cronjob like this:
0 5 * * 1 sudo tar -Pzcf /var/backups/home.tgz /home/
But I want a solution which adds a timestamp to the filename and does not overwrite old backups. Of course this will slowly fill my drive, so old backups (e.g. older than 2 months) need to be deleted automatically.
Cheers,
Dennis
UPDATE: I've decided to give the bounty to the logrotate solution because of its simplicity. But big thanks to all the other answerers, too!
Tags: backup, cron
Have you considered rsync (downloadable from the Ubuntu Software Centre)? – Graham, Apr 26 at 8:56
@Graham Yes, but I already have remote backups and want to back up a specific directory locally. And I want to keep not just one but several snapshots. – wottpal, Apr 27 at 9:31
A while ago I wrote an answer to a similar question: How do I create a custom backup script? – pa4080, Apr 29 at 11:29
What about rsnapshot, which is based on rsync and intended for backups (with automatic deletion of old backups etc.)? – DJCrashdummy, Apr 29 at 11:39
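For readers who want to try the rsnapshot suggestion, here is a hedged sketch of a minimal setup; the directive names come from rsnapshot.conf, but the paths, retention count and schedule below are assumptions, not something taken from this thread:
sudo apt install rsnapshot
# In /etc/rsnapshot.conf (fields must be TAB-separated), roughly:
#   snapshot_root   /var/backups/snapshots/
#   retain          weekly  8            # keep 8 weekly snapshots, about two months
#   backup          /home/  localhost/
# Then schedule it in the same weekly slot as the question's cron job:
#   0 5 * * 1  /usr/bin/rsnapshot weekly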
4 Answers
Accepted answer (score 4), answered Apr 30 at 22:12 by Sebastian Stark
Simple solution using logrotate
If you want to keep it simple and without scripting, just stay with your current cronjob and in addition configure a logrotate rule for it. To do that, place the following in a file named /etc/logrotate.d/backup-home:
/var/backups/home.tgz {
    weekly
    rotate 8
    nocompress
    dateext
}
From now on, each time logrotate runs (and it will normally do so every day at ~6:25 am), it will check if the file is due for rotation and, if so, rename your home.tgz to another file with a timestamp added. It will keep 8 copies of it, so you have roughly two months of history.
You can customize the timestamp using the dateformat option; see logrotate(8).
Because your backup job runs at 5 am and logrotate runs at 6:25 am, you should make sure your tar backup finishes in well under 1 hour and 25 minutes (I guess it will be much faster anyway).
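To test the rule without waiting a week, logrotate can be run by hand; a minimal sketch, assuming the config path above (-d and -f are logrotate's standard debug and force flags):
sudo logrotate -d /etc/logrotate.d/backup-home   # dry run: shows what would happen, changes nothing
sudo logrotate -f /etc/logrotate.d/backup-home   # force one rotation right now
ls /var/backups/home.tgz-*                       # dateext appends -YYYYMMDD by default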
+1 for the logrotate idea. Unfortunately the time when cron.daily jobs are run is a bit more complicated, because cron and anacron interact in some way and behave differently depending on whether anacron is installed (desktop) or not (server). See here, for example. But either way: I've had very good experiences with cron.daily because my system isn't up all the time, and cron.daily simply says: run it once a day if the computer is up. You can change the order of execution by renaming the files in /etc/cron.daily. – PerlDuck, May 1 at 14:31
@PerlDuck yes, it is simple but not very robust. But this is what you get when not using proper backup software :) – Sebastian Stark, May 1 at 16:12
To be honest, I prefer my incremental rsync solution (and your approach) over any proper backup software, because they tend to complicate things: often a proprietary or at least obfuscated format, files hidden somewhere in an archive or multiple archives with weird names like abf42df82de92a.001.gz.gpg, and an obscure database that tells which file is where. No chance to recover the files without installing the proper backup software again to restore them. Thus I like your combination of tar.gz plus logrotate. – PerlDuck, May 1 at 16:25
You could run the backup as the pre- or postrotate script in logrotate, but then it would not be a minimal solution for the OP's problem anymore. – Sebastian Stark, May 2 at 5:12
Also, what I found to be the biggest problem with "roll-your-own-backup" scripts in my experience is the handling of failures and unexpected situations (full disk, empty archives, bugs, logging, notifications). – Sebastian Stark, May 2 at 5:14
Answer (score 5), answered Apr 29 at 11:11 by PerlDuck, edited Apr 29 at 11:28 by pa4080
This is (a variant of) the script I use (/home/pduck/bup.sh):
#!/usr/bin/env bash

src_dir=/home/pduck
tgt_dir=/tmp/my-backups

mkdir -p "$tgt_dir"

# current backup directory, e.g. "2017-04-29T13:04:50"
now=$(date +%FT%H:%M:%S)

# previous backup directory
prev=$(ls "$tgt_dir" | grep -e '^....-..-..T..:..:..$' | tail -1)

if [ -z "$prev" ]; then
    # initial backup
    rsync -av --delete "$src_dir" "$tgt_dir/$now/"
else
    # incremental backup
    rsync -av --delete --link-dest="$tgt_dir/$prev/" "$src_dir" "$tgt_dir/$now/"
fi

exit 0
It uses rsync to locally copy the files from my home directory to a backup location, /tmp/my-backups in my case. Below that target directory, a directory with the current timestamp is created, e.g. /tmp/my-backups/2018-04-29T12:49:42, and the backup of that day is placed below it.
When the script is run once again, it notices that there is already a directory /tmp/my-backups/2018-04-29T12:49:42 (it picks the "latest" directory that matches the timestamp pattern). It then executes the rsync command, but this time with the --link-dest=/tmp/my-backups/2018-04-29T12:49:42/ switch to point to the previous backup. This is the actual point of making incremental backups: with --link-dest=…, rsync does not copy files that were unchanged compared to the files in the link-dest directory. Instead it just creates hardlinks between the current and the previous files.
When you run this script 10 times, you get 10 directories with the various timestamps and each holds a snapshot of the files at that time. You can browse the directories and restore the files you want.
Housekeeping is also very easy: just rm -rf the timestamp directory you don't want to keep. This will not remove older, newer, or unchanged files; it just removes (decrements) the hardlinks. For example, if you have three generations:
/tmp/my-backups/2018-04-29T...
/tmp/my-backups/2018-04-30T...
/tmp/my-backups/2018-05-01T...
and delete the 2nd directory, then you just lose the snapshot of that day, but the files are still in either the 1st or the 3rd directory (or both).
I've put a cronjob in /etc/cron.daily that reads:
#!/bin/sh
/usr/bin/systemd-cat -t backupscript -p info /home/pduck/bup.sh
Name that file backup or something, chmod +x it, but omit the .sh suffix (otherwise it won't be run, because run-parts skips file names containing a dot). Thanks to /usr/bin/systemd-cat -t backupscript -p info you can watch the progress via journalctl -t backupscript.
Note that this rsync solution requires the target directory to be on an ext4 filesystem because of the hardlinks.
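The script itself never deletes anything, so to satisfy the two-month limit from the question you still need a pruning step. A minimal sketch, relying on the fact that the directory names sort chronologically (as noted in the comments below); the 60-day cutoff and the loop itself are my assumptions, not part of the original script:
# Delete snapshot directories whose timestamp name is older than 60 days.
cutoff=$(date -d '60 days ago' +%FT%H:%M:%S)
for d in "$tgt_dir"/*T*; do
    [ -d "$d" ] || continue
    [ "$(basename "$d")" \< "$cutoff" ] && rm -rf "$d"
done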
I recommend using UTC time (date -u), since local time can change, especially if you're in a region using daylight saving time. Also, date -u -Is will print the date in ISO 8601 format. – pim, May 1 at 13:52
@pim Good catch, especially the UTC thing. And yes, we can play with the time format, omit the seconds or the time altogether. That depends on how exact you want it to be. I personally don't like the TZ suffix (+02:00 in my German case). The only important thing is that the lexical order must mirror the chronological order to simplify picking the previous directory. And yes, we shouldn't parse ls output. – PerlDuck, May 1 at 14:01
I've adapted this nice script for my needs and added an exclude option (and also put in some quote marks): paste.ubuntu.com/p/NgfXMPy8pK – pa4080, May 1 at 17:09
@pa4080 Cool, I feel honored. :-) There are more directories you can exclude, e.g. ~/.cache and probably some other dot-dirs. – PerlDuck, May 1 at 17:13
Just for the record, I've created a GitHub repository where my backup scripts are available: github.com/pa4080/simple-backup-solutions – pa4080, May 8 at 7:29
Answer (score 3), answered Apr 29 at 22:56 by Eskander Bejaoui
With a little edit to your cron command you can add a timestamp to the filename:
0 5 * * 1 sudo tar -Pzcf /var/backups/home_$(date "+%Y-%m-%d_%H-%M-%S").tgz /home/
And as for the cleanup, I found an awesome one-liner here that I adapted to your case:
find . -type f -name 'home_*.tgz' -exec sh -c 'bcp="${1%_*}"; bcp="${bcp#*_}"; [ "$bcp" "<" "$(date +%F -d "60 days ago")" ] && rm "$1"' 0 {} \;
You can add the above command to another cron job and it will remove backups older than 60 days. HTH
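One caveat if you paste these lines straight into a crontab: cron treats an unescaped % as a line break, so the date format needs backslashes there. A hedged sketch of how the two entries might look (the 6 am cleanup slot is my assumption, and the second line shows a simpler mtime-based alternative to the name-parsing one-liner above):
# Weekly backup at 05:00 on Mondays; note the \% escaping that crontab requires.
0 5 * * 1 sudo tar -Pzcf /var/backups/home_$(date "+\%Y-\%m-\%d_\%H-\%M-\%S").tgz /home/
# Weekly cleanup an hour later: delete archives last modified more than ~60 days ago.
0 6 * * 1 sudo find /var/backups -type f -name 'home_*.tgz' -mtime +60 -delete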
Answer (score 3), answered Apr 29 at 23:22 by WinEunuuchs2Unix
Here is part of a solution from my daily backup script, which is called by cron: Backup Linux configuration, scripts and documents to Gmail. The full script would be inappropriate here because:
- it includes targeted /home/me/* files but skips 1 GB of /home/ files used by Firefox, Chrome and other apps, which may be important to you but which I have no interest in backing up.
- it includes files important to me but unimportant to you in /etc/cron*, /etc/system*, /lib/systemd/system-sleep, /etc/rc.local, /boot/grub, /usr/share/plymouth, /etc/apt/trusted.gpg, etc.
- it emails the backup every morning to my gmail.com account for off-site backups. Your backups are not only on-site but also on the same machine.
Here is the relevant script, parts of which you might adapt:
#!/bin/sh
#
# NAME: daily-backup
# DESC: A .tar backup file is created, emailed and removed.
# DATE: Nov 25, 2017.
# CALL: WSL or Ubuntu calls from /etc/cron.daily/daily-backup
# PARM: No parameters but /etc/ssmtp/ssmtp.conf must be set up
# NOTE: Backup file name contains machine name + Distro
#       Same script for user with multiple dual-boot laptops
#       Single machine should remove $HOSTNAME from name
#       Single distribution should remove $Distro

sleep 30 # Wait 30 seconds after boot

# Running under WSL (Windows Subsystem for Linux)?
if grep Microsoft /proc/version; then
    Distro="WSL"
else
    Distro="Ubuntu"
fi

today=$( date +%Y-%m-%d-%A )

/mnt/e/bin/daily-backup.sh Daily-$(hostname)-$Distro-backup-$today
My gmail.com account is only 35% full (out of 15 GB), so my daily backups can run for a while longer before I have to delete files. But rather than an "everything older than xxx" philosophy, I'll use a grandfather-father-son strategy as outlined here: Is it necessary to keep records of my backups?. In summary (a rough shell sketch for locally kept archives follows the list):
- Monday to Sunday (Daily backups) that get purged after 14 days
- Sunday backups (Weekly backups) purged after 8 weeks
- Last day of month backups (Monthly backups) purged after 18 months
- Last day of year backups (Yearly backups) kept forever
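As referenced above, here is a rough shell sketch of such a grandfather-father-son purge for archives kept locally, covering only the daily and weekly tiers; the directory, the reliance on the weekday embedded in the file name, and the omission of the monthly and yearly tiers are all my assumptions (the author's own purge happens in Gmail):
#!/bin/sh
# Hypothetical GFS purge for files named Daily-<host>-<Distro>-backup-YYYY-MM-DD-Weekday*
BACKUP_DIR=/var/backups/daily   # assumed location
for f in "$BACKUP_DIR"/Daily-*; do
    [ -e "$f" ] || continue
    weekday=$(basename "$f" | grep -oE '(Sun|Mon|Tues|Wednes|Thurs|Fri|Satur)day' | tail -1)
    age_days=$(( ( $(date +%s) - $(date -r "$f" +%s) ) / 86400 ))   # file age in days
    if [ "$weekday" = "Sunday" ]; then
        keep_days=56    # weekly tier: keep 8 weeks
    else
        keep_days=14    # daily tier: keep 14 days
    fi
    [ "$age_days" -gt "$keep_days" ] && rm -f "$f"
done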
My purging process will be complicated by the fact I'll have to learn Python and install a Python library to manage gmail folders.
If you don't want generational backups and just want to purge files older than 2 months, this answer will help: Find not removing files in folders through bash script.
In summary:
DAYS_TO_KEEP=60
find "$BACKUP_DIR" -maxdepth 1 -mtime +"$DAYS_TO_KEEP" -exec rm -rf {} \;