Simple Backup Solution

I'm looking for a very basic backup script/package for a directory on my Ubuntu server. Currently I'm using a cronjob like this:



0 5 * * 1 sudo tar -Pzcf /var/backups/home.tgz /home/


But I want a solution which adds a timestamp to the filename and does not overwrite old backups. Of course this will slowly fill my drive, so old backups (e.g. older than 2 months) need to be deleted automatically.



Cheers,
Dennis




UPDATE: I've decided to give the bounty to the logrotate solution because of its simplicity. But big thanks to all the other answerers, too!
  • Have you considered rsync (downloadable from the Ubuntu Software Centre)?
    – Graham
    Apr 26 at 8:56










  • @Graham Yes, but I already have remote backups and want to back up a specific directory locally. And I want to keep not only one but several snapshots.
    – wottpal
    Apr 27 at 9:31











  • A while ago I wrote an answer to a similar question: How do I create a custom backup script?
    – pa4080
    Apr 29 at 11:29







  • What about rsnapshot, which is based on rsync and intended for backups (with automatic deletion of old backups etc.)?
    – DJCrashdummy
    Apr 29 at 11:39


asked Apr 26 at 8:51 by wottpal; edited May 2 at 10:30




4 Answers




Accepted answer (score 4, +50 bounty), answered Apr 30 at 22:12 by Sebastian Stark:

Simple solution using logrotate



If you want to keep it simple and without scripting, just stay with your current cronjob and in addition configure a logrotate rule for it.



To do that, place the following in a file named /etc/logrotate.d/backup-home:



/var/backups/home.tgz {
    weekly
    rotate 8
    nocompress
    dateext
}



From now on, each time logrotate runs (and it will normally do so every day at ~6:25am), it will check whether the archive is due for rotation and, if so, rename your home.tgz to another file with a timestamp appended. It will keep 8 copies, so you have roughly two months of history.



You can customize the timestamp using the dateformat option, see logrotate(8).
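
For example, a rule like the following (a sketch; the dateformat string is only an illustration, check logrotate(8) for the % specifiers your version supports) would produce names such as home.tgz-2018-05-01:



/var/backups/home.tgz {
    weekly
    rotate 8
    nocompress
    dateext
    dateformat -%Y-%m-%d
}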



Because your backup job runs at 5am and logrotate runs at 6:25am, you should make sure your tar backup finishes well within that 1h 25m window (I guess it will be much faster anyway).

  • +1 for the logrotate idea. Unfortunately the time when cron.daily jobs are run is a bit more complicated, because cron and anacron interact and behave differently depending on whether anacron is installed (desktop) or not (server). See here, for example. But either way: I've had very good experiences with cron.daily because my system isn't up all the time, and cron.daily simply says: run it once a day if the computer is up. You can change the order of execution by renaming the files in /etc/cron.daily.
    – PerlDuck
    May 1 at 14:31










  • @PerlDuck yes, it is simple but not very robust. But this is what you get when not using proper backup software :)
    – Sebastian Stark
    May 1 at 16:12










  • To be honest I prefer my incremental rsync solution (and your approach) over any proper backup software because they tend to complicate things. Often proprietary or at least obfuscated format, files hidden somewhere in an archive or multiple archives with weird names like abf42df82de92a.001.gz.gpg and an obscure database that tells which file is where. No chance to recover the files without installing the proper backup software again to restore them. Thus I like your combination of tar.gz plus logrotate.
    – PerlDuck
    May 1 at 16:25











  • You could run the backup as the pre- or postrotate script in logrotate, but then it would not be a minimal solution for the OP's problem anymore.
    – Sebastian Stark
    May 2 at 5:12










  • Also, what I found to be the biggest problem with "roll-your-own-backup" scripts in my experience is the handling of failures and unexpected situations (full disk, empty archives, bugs, logging, notifications).
    – Sebastian Stark
    May 2 at 5:14




Answer (score 5), answered Apr 29 at 11:11 by PerlDuck (edited Apr 29 at 11:28 by pa4080):

This is (a variant of) the script I use (/home/pduck/bup.sh):





#!/usr/bin/env bash

src_dir=/home/pduck
tgt_dir=/tmp/my-backups
mkdir -p "$tgt_dir"

# current backup directory, e.g. "2017-04-29T13:04:50"
now=$(date +%FT%H:%M:%S)

# previous backup directory (latest entry matching the timestamp pattern)
prev=$(ls "$tgt_dir" | grep -e '^....-..-..T..:..:..$' | tail -1)

if [ -z "$prev" ]; then
    # initial backup
    rsync -av --delete "$src_dir" "$tgt_dir/$now/"
else
    # incremental backup: hardlink unchanged files to the previous snapshot
    rsync -av --delete --link-dest="$tgt_dir/$prev/" "$src_dir" "$tgt_dir/$now/"
fi

exit 0


It uses rsync to locally copy the files from my home directory to a backup location, /tmp/my-backups in my case.
Below that target directory a directory with the current timestamp is created, e.g. /tmp/my-backups/2018-04-29T12:49:42 and below that directory the backup of that day is placed.



When the script is run again, it notices that there is already a directory /tmp/my-backups/2018-04-29T12:49:42 (it picks the "latest" directory that matches the timestamp pattern). It then executes the rsync command, but this time with the --link-dest=/tmp/my-backups/2018-04-29T12:49:42/ switch pointing to the previous backup.



This is the actual point of making incremental backups:



With --link-dest=… rsync does not copy files that were unchanged compared to the files in the link-dest directory. Instead it just creates hardlinks between the current and the previous files.



When you run this script 10 times, you get 10 directories with the various timestamps and each holds a snapshot of the files at that time. You can browse the directories and restore the files you want.
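
To see how little extra space each snapshot costs, du is handy: within a single invocation it counts every hardlinked inode only once, so after the first directory each snapshot shows roughly only its changed files (a quick check, assuming the /tmp/my-backups layout above):



du -hsc /tmp/my-backups/*/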



Housekeeping is also very easy: just rm -rf the timestamped directory you don't want to keep. This will not remove older or newer or unchanged files; it just removes (decrements) the hardlinks. For example, if you have three generations:



  • /tmp/my-backups/2018-04-29T...

  • /tmp/my-backups/2018-04-30T...

  • /tmp/my-backups/2018-05-01T...

and delete the 2nd directory, then you just lose the snapshot of that day, but the files are still in either the 1st or the 3rd directory (or both).
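

A minimal pruning sketch along those lines (assuming the timestamped layout above and GNU date; the 60-day cutoff mirrors the question). Because the names sort chronologically, a plain string comparison finds the expired snapshots:



# delete snapshots whose timestamped name sorts before the cutoff
cutoff=$(date -d '60 days ago' +%FT%H:%M:%S)
for d in /tmp/my-backups/????-??-??T??:??:??; do
    [[ $(basename "$d") < $cutoff ]] && rm -rf "$d"
done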



I've put a cronjob in /etc/cron.daily that reads:



#!/bin/sh
/usr/bin/systemd-cat -t backupscript -p info /home/pduck/bup.sh


Name that file backup or something, and chmod +x it, but omit the .sh suffix (run-parts skips file names containing a dot, so it won't be run otherwise). Due to /usr/bin/systemd-cat -t backupscript -p info you can watch the progress via journalctl -t backupscript.
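
A possible installation sequence (the file name backup is just an example; run-parts --test shows whether cron.daily would actually pick the script up):



sudo cp backup /etc/cron.daily/backup
sudo chmod +x /etc/cron.daily/backup
run-parts --test /etc/cron.daily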



Note that this rsync solution requires the target directory to be on a filesystem that supports hardlinks, such as ext4.

  • I recommend using UTC time (date -u), since local time can change, especially if you're in a region using daylight saving time. Also, date -u -Is will print the date in ISO 8601 format.
    – pim
    May 1 at 13:52










  • @pim Good catch, especially the UTC thing. And yes, we can play with the time format, omit the seconds or the time altogether. That depends on how exact you want it to be. I personally don't like the TZ suffix (+02:00 in my German case). The only important thing is that the lexical order must mirror the chronological order to simplify picking the previous directory. And yes, we shouldn't parse ls output.
    – PerlDuck
    May 1 at 14:01







  • I've adapted this nice script for my needs and added an exclude option (and also put in some quote marks): paste.ubuntu.com/p/NgfXMPy8pK
    – pa4080
    May 1 at 17:09







  • @pa4080 Cool, I feel honored. :-) There are more directories you can exclude, e.g. ~/.cache and probably some other dotdirs.
    – PerlDuck
    May 1 at 17:13






  • Just for the record: I've created a GitHub repository where my backup scripts are available: github.com/pa4080/simple-backup-solutions
    – pa4080
    May 8 at 7:29




Answer (score 3), answered Apr 29 at 22:56 by Eskander Bejaoui:

With a little edit to your cron command you can add a timestamp to the filename:



0 5 * * 1 sudo tar -Pzcf /var/backups/home_$(date "+%Y-%m-%d_%H-%M-%S").tgz /home/
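
One caveat: inside a crontab, % is a special character (unescaped, it ends the command and starts stdin input), so when this line goes into a crontab the format string needs escaping:



0 5 * * 1 sudo tar -Pzcf /var/backups/home_$(date "+\%Y-\%m-\%d_\%H-\%M-\%S").tgz /home/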


And as for the cleaning, I found an awesome one-line script here that I adapted to your case:



find . -type f -name 'home_*.tgz' -exec sh -c 'bcp="${1%_*}"; bcp="${bcp#*_}"; [ "$bcp" "<" "$(date +%F -d "60 days ago")" ] && rm "$1"' 0 {} \;


You can add the above command to another cron job and it will remove backups older than 60 days. HTH
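
If the quoting in that one-liner is hard to follow, the same logic can be spelled out as a small script (a sketch, assuming the home_YYYY-mm-dd_HH-MM-SS.tgz names produced by the cron line above):



#!/usr/bin/env bash
# prune-backups: delete home_*.tgz archives whose embedded date
# is lexically older than the cutoff (60 days ago)
cutoff=$(date +%F -d "60 days ago")
for f in /var/backups/home_*.tgz; do
    [[ -e $f ]] || continue   # no backups yet
    stamp=${f##*/home_}       # drop the path and the "home_" prefix
    stamp=${stamp%%_*}        # keep only the YYYY-mm-dd part
    [[ $stamp < $cutoff ]] && rm -- "$f"
done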




Answer (score 3):

Here is part of a solution from my daily backup script, which is called by cron: Backup Linux configuration, scripts and documents to Gmail. The full script is inappropriate to post here because:



  • it includes targeted /home/me/* files but skips 1 GB of /home/ files used by Firefox, Chrome and other apps which I have no interest in backing up, although they may be important to you.

  • it includes files important to me but unimportant to you in /etc/cron*, /etc/system*, /lib/systemd/system-sleep, /etc/rc.local, /boot/grub, /usr/share/plymouth, /etc/apt/trusted.gpg, etc.

  • it emails the backup every morning to my gmail.com account for off-site backups; your backups, by contrast, are not only on-site but also on the same machine.

Here is the relevant script, parts of which you might adapt:



#!/bin/sh
#
# NAME: daily-backup
# DESC: A .tar backup file is created, emailed and removed.
# DATE: Nov 25, 2017.
# CALL: WSL or Ubuntu calls from /etc/cron.daily/daily-backup
# PARM: No parameters but /etc/ssmtp/ssmtp.conf must be set up

# NOTE: Backup file name contains machine name + Distro
#       Same script for user with multiple dual boot laptops
#       Single machine should remove $HOSTNAME from name
#       Single distribution should remove $Distro

sleep 30 # Wait 30 seconds after boot

# Running under WSL (Windows Subsystem for Linux)?
if cat /proc/version | grep Microsoft; then
    Distro="WSL"
else
    Distro="Ubuntu"
fi

today=$( date +%Y-%m-%d-%A )
/mnt/e/bin/daily-backup.sh Daily-$(hostname)-$Distro-backup-$today



My gmail.com account is only 35% full (out of 15 GB), so my daily backups can run for a while more before I have to delete files. But rather than an "everything older than xxx" philosophy, I'll use a grandfather-father-son strategy, as outlined here: Is it necessary to keep records of my backups?. In summary:



  • Monday to Sunday (daily backups), purged after 14 days

  • Sunday backups (weekly backups), purged after 8 weeks

  • Last-day-of-month backups (monthly backups), purged after 18 months

  • Last-day-of-year backups (yearly backups), kept forever

My purging process will be complicated by the fact that I'll have to learn Python and install a Python library to manage Gmail folders.



If you don't want generational backups and just want to purge files older than 2 months, this answer will help: Find not removing files in folders through bash script.



In summary:



DAYS_TO_KEEP=60
find "$BACKUP_DIR" -maxdepth 1 -mtime +"$DAYS_TO_KEEP" -exec rm -rf {} \;
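
Before wiring that into cron, a dry run is cheap insurance (a sketch; $BACKUP_DIR stands for whatever directory holds your archives, e.g. /var/backups from the question):



BACKUP_DIR=/var/backups
DAYS_TO_KEEP=60
# -print instead of -exec rm -rf: only list what would be deleted
find "$BACKUP_DIR" -maxdepth 1 -mtime +"$DAYS_TO_KEEP" -print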





    share|improve this answer




















      Your Answer







      StackExchange.ready(function()
      var channelOptions =
      tags: "".split(" "),
      id: "89"
      ;
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function()
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled)
      StackExchange.using("snippets", function()
      createEditor();
      );

      else
      createEditor();

      );

      function createEditor()
      StackExchange.prepareEditor(
      heartbeatType: 'answer',
      convertImagesToLinks: true,
      noModals: false,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: 10,
      bindNavPrevention: true,
      postfix: "",
      onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      );



      );













       

      draft saved


      draft discarded


















      StackExchange.ready(
      function ()
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2faskubuntu.com%2fquestions%2f1028321%2fsimple-backup-solution%23new-answer', 'question_page');

      );

      Post as a guest






























      4 Answers
      4






      active

      oldest

      votes








      4 Answers
      4






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes








      up vote
      4
      down vote



      accepted
      +50










      Simple solution using logrotate



      If you want to keep it simple and without scripting, just stay with your current cronjob and in addition configure a logrotate rule for it.



      To do that, place the following in a file named /etc/logrotate.d/backup-home:



      /var/backups/home.tgz 
      weekly
      rotate 8
      nocompress
      dateext



      From now on, each time logrotate runs (and it will normally do so every day at ~6:25am), it will check if it's suitable for rotation and, if so, rename your home.tgz to another file with a timestamp added. It will keep 8 copies of it, so you have roughly two months of history.



      You can customize the timestamp using the dateformat option, see logrotate(8).



      Because your backup job runs at 5am and logrotate runs at 6:25am you should make sure your tar backup runs well under 1h and 25m (I guess it will be much faster anyway).






      share|improve this answer




















      • +1 for the logratate idea. Unfortunately the time when cron.daily jobs are run is a bit more complicated because cron and anacron interact in some way and behave differently depending on whether anacron is installed (desktop) or not (server). See here, for example. But either way: I've made very good experiences with cron.daily because my system isn't up all the time and cron.daily simply says: run it once a day if the computer is up. You can change the order of execution by renaming the files in /etc/cron.daily.
        – PerlDuck
        May 1 at 14:31










      • @PerlDuck yes, it is simple but not very robust. But this is what you get when not using proper backup software :)
        – Sebastian Stark
        May 1 at 16:12










      • To be honest I prefer my incremental rsync solution (and your approach) over any proper backup software because they tend to complicate things. Often proprietary or at least obfuscated format, files hidden somewhere in an archive or multiple archives with weird names like abf42df82de92a.001.gz.gpg and an obscure database that tells which file is where. No chance to recover the files without installing the proper backup software again to restore them. Thus I like your combination of tar.gz plus logrotate.
        – PerlDuck
        May 1 at 16:25











      • You could run the backup as the pre- or postrotate script in logrotate, but then it would not be a minimal solution for OPs problem anymore.
        – Sebastian Stark
        May 2 at 5:12










      • Also what I found to be the biggest problem in "roll-your-own-backup" scripts in my experience is handling of failure and unexpected situations (full disk, empty archives, bugs, logging, notifications)
        – Sebastian Stark
        May 2 at 5:14














      up vote
      4
      down vote



      accepted
      +50










      Simple solution using logrotate



      If you want to keep it simple and without scripting, just stay with your current cronjob and in addition configure a logrotate rule for it.



      To do that, place the following in a file named /etc/logrotate.d/backup-home:



      /var/backups/home.tgz 
      weekly
      rotate 8
      nocompress
      dateext



      From now on, each time logrotate runs (and it will normally do so every day at ~6:25am), it will check if it's suitable for rotation and, if so, rename your home.tgz to another file with a timestamp added. It will keep 8 copies of it, so you have roughly two months of history.



      You can customize the timestamp using the dateformat option, see logrotate(8).



      Because your backup job runs at 5am and logrotate runs at 6:25am you should make sure your tar backup runs well under 1h and 25m (I guess it will be much faster anyway).






      share|improve this answer




















      • +1 for the logratate idea. Unfortunately the time when cron.daily jobs are run is a bit more complicated because cron and anacron interact in some way and behave differently depending on whether anacron is installed (desktop) or not (server). See here, for example. But either way: I've made very good experiences with cron.daily because my system isn't up all the time and cron.daily simply says: run it once a day if the computer is up. You can change the order of execution by renaming the files in /etc/cron.daily.
        – PerlDuck
        May 1 at 14:31










      • @PerlDuck yes, it is simple but not very robust. But this is what you get when not using proper backup software :)
        – Sebastian Stark
        May 1 at 16:12










      • To be honest I prefer my incremental rsync solution (and your approach) over any proper backup software because they tend to complicate things. Often proprietary or at least obfuscated format, files hidden somewhere in an archive or multiple archives with weird names like abf42df82de92a.001.gz.gpg and an obscure database that tells which file is where. No chance to recover the files without installing the proper backup software again to restore them. Thus I like your combination of tar.gz plus logrotate.
        – PerlDuck
        May 1 at 16:25











      • You could run the backup as the pre- or postrotate script in logrotate, but then it would not be a minimal solution for OPs problem anymore.
        – Sebastian Stark
        May 2 at 5:12










      • Also what I found to be the biggest problem in "roll-your-own-backup" scripts in my experience is handling of failure and unexpected situations (full disk, empty archives, bugs, logging, notifications)
        – Sebastian Stark
        May 2 at 5:14












      up vote
      4
      down vote



      accepted
      +50







      up vote
      4
      down vote



      accepted
      +50




      +50




      Simple solution using logrotate



      If you want to keep it simple and without scripting, just stay with your current cronjob and in addition configure a logrotate rule for it.



      To do that, place the following in a file named /etc/logrotate.d/backup-home:



      /var/backups/home.tgz 
      weekly
      rotate 8
      nocompress
      dateext



      From now on, each time logrotate runs (and it will normally do so every day at ~6:25am), it will check if it's suitable for rotation and, if so, rename your home.tgz to another file with a timestamp added. It will keep 8 copies of it, so you have roughly two months of history.



      You can customize the timestamp using the dateformat option, see logrotate(8).



      Because your backup job runs at 5am and logrotate runs at 6:25am you should make sure your tar backup runs well under 1h and 25m (I guess it will be much faster anyway).






      share|improve this answer












      Simple solution using logrotate



      If you want to keep it simple and without scripting, just stay with your current cronjob and in addition configure a logrotate rule for it.



      To do that, place the following in a file named /etc/logrotate.d/backup-home:



      /var/backups/home.tgz 
      weekly
      rotate 8
      nocompress
      dateext



      From now on, each time logrotate runs (and it will normally do so every day at ~6:25am), it will check if it's suitable for rotation and, if so, rename your home.tgz to another file with a timestamp added. It will keep 8 copies of it, so you have roughly two months of history.



      You can customize the timestamp using the dateformat option, see logrotate(8).



      Because your backup job runs at 5am and logrotate runs at 6:25am you should make sure your tar backup runs well under 1h and 25m (I guess it will be much faster anyway).







      share|improve this answer












      share|improve this answer



      share|improve this answer










      answered Apr 30 at 22:12









      Sebastian Stark

      4,668938




      4,668938











      • +1 for the logratate idea. Unfortunately the time when cron.daily jobs are run is a bit more complicated because cron and anacron interact in some way and behave differently depending on whether anacron is installed (desktop) or not (server). See here, for example. But either way: I've made very good experiences with cron.daily because my system isn't up all the time and cron.daily simply says: run it once a day if the computer is up. You can change the order of execution by renaming the files in /etc/cron.daily.
        – PerlDuck
        May 1 at 14:31










      • @PerlDuck yes, it is simple but not very robust. But this is what you get when not using proper backup software :)
        – Sebastian Stark
        May 1 at 16:12










      • To be honest I prefer my incremental rsync solution (and your approach) over any proper backup software because they tend to complicate things. Often proprietary or at least obfuscated format, files hidden somewhere in an archive or multiple archives with weird names like abf42df82de92a.001.gz.gpg and an obscure database that tells which file is where. No chance to recover the files without installing the proper backup software again to restore them. Thus I like your combination of tar.gz plus logrotate.
        – PerlDuck
        May 1 at 16:25











      • You could run the backup as the pre- or postrotate script in logrotate, but then it would not be a minimal solution for OPs problem anymore.
        – Sebastian Stark
        May 2 at 5:12










      • Also what I found to be the biggest problem in "roll-your-own-backup" scripts in my experience is handling of failure and unexpected situations (full disk, empty archives, bugs, logging, notifications)
        – Sebastian Stark
        May 2 at 5:14
















      • +1 for the logratate idea. Unfortunately the time when cron.daily jobs are run is a bit more complicated because cron and anacron interact in some way and behave differently depending on whether anacron is installed (desktop) or not (server). See here, for example. But either way: I've made very good experiences with cron.daily because my system isn't up all the time and cron.daily simply says: run it once a day if the computer is up. You can change the order of execution by renaming the files in /etc/cron.daily.
        – PerlDuck
        May 1 at 14:31










      • @PerlDuck yes, it is simple but not very robust. But this is what you get when not using proper backup software :)
        – Sebastian Stark
        May 1 at 16:12










      • To be honest I prefer my incremental rsync solution (and your approach) over any proper backup software because they tend to complicate things. Often proprietary or at least obfuscated format, files hidden somewhere in an archive or multiple archives with weird names like abf42df82de92a.001.gz.gpg and an obscure database that tells which file is where. No chance to recover the files without installing the proper backup software again to restore them. Thus I like your combination of tar.gz plus logrotate.
        – PerlDuck
        May 1 at 16:25











      • You could run the backup as the pre- or postrotate script in logrotate, but then it would not be a minimal solution for OPs problem anymore.
        – Sebastian Stark
        May 2 at 5:12










      • Also what I found to be the biggest problem in "roll-your-own-backup" scripts in my experience is handling of failure and unexpected situations (full disk, empty archives, bugs, logging, notifications)
        – Sebastian Stark
        May 2 at 5:14















      +1 for the logratate idea. Unfortunately the time when cron.daily jobs are run is a bit more complicated because cron and anacron interact in some way and behave differently depending on whether anacron is installed (desktop) or not (server). See here, for example. But either way: I've made very good experiences with cron.daily because my system isn't up all the time and cron.daily simply says: run it once a day if the computer is up. You can change the order of execution by renaming the files in /etc/cron.daily.
      – PerlDuck
      May 1 at 14:31




      +1 for the logratate idea. Unfortunately the time when cron.daily jobs are run is a bit more complicated because cron and anacron interact in some way and behave differently depending on whether anacron is installed (desktop) or not (server). See here, for example. But either way: I've made very good experiences with cron.daily because my system isn't up all the time and cron.daily simply says: run it once a day if the computer is up. You can change the order of execution by renaming the files in /etc/cron.daily.
      – PerlDuck
      May 1 at 14:31












      @PerlDuck yes, it is simple but not very robust. But this is what you get when not using proper backup software :)
      – Sebastian Stark
      May 1 at 16:12




      @PerlDuck yes, it is simple but not very robust. But this is what you get when not using proper backup software :)
      – Sebastian Stark
      May 1 at 16:12












      To be honest I prefer my incremental rsync solution (and your approach) over any proper backup software because they tend to complicate things. Often proprietary or at least obfuscated format, files hidden somewhere in an archive or multiple archives with weird names like abf42df82de92a.001.gz.gpg and an obscure database that tells which file is where. No chance to recover the files without installing the proper backup software again to restore them. Thus I like your combination of tar.gz plus logrotate.
      – PerlDuck
      May 1 at 16:25





      To be honest I prefer my incremental rsync solution (and your approach) over any proper backup software because they tend to complicate things. Often proprietary or at least obfuscated format, files hidden somewhere in an archive or multiple archives with weird names like abf42df82de92a.001.gz.gpg and an obscure database that tells which file is where. No chance to recover the files without installing the proper backup software again to restore them. Thus I like your combination of tar.gz plus logrotate.
      – PerlDuck
      May 1 at 16:25













      You could run the backup as the pre- or postrotate script in logrotate, but then it would not be a minimal solution for OPs problem anymore.
      – Sebastian Stark
      May 2 at 5:12




      You could run the backup as the pre- or postrotate script in logrotate, but then it would not be a minimal solution for OPs problem anymore.
      – Sebastian Stark
      May 2 at 5:12












      Also what I found to be the biggest problem in "roll-your-own-backup" scripts in my experience is handling of failure and unexpected situations (full disk, empty archives, bugs, logging, notifications)
      – Sebastian Stark
      May 2 at 5:14




      Also what I found to be the biggest problem in "roll-your-own-backup" scripts in my experience is handling of failure and unexpected situations (full disk, empty archives, bugs, logging, notifications)
      – Sebastian Stark
      May 2 at 5:14












      up vote
      5
      down vote













      This is (a variant of) the script I use (/home/pduck/bup.sh):





      #!/usr/bin/env bash

      src_dir=/home/pduck
      tgt_dir=/tmp/my-backups
      mkdir -p $tgt_dir

      # current backup directory, e.g. "2017-04-29T13:04:50";
      now=$(date +%FT%H:%M:%S)

      # previous backup directory
      prev=$(ls $tgt_dir | grep -e '^....-..-..T..:..:..$' | tail -1);

      if [ -z "$prev" ]; then
      # initial backup
      rsync -av --delete $src_dir $tgt_dir/$now/
      else
      # incremental backup
      rsync -av --delete --link-dest=$tgt_dir/$prev/ $src_dir $tgt_dir/$now/
      fi

      exit 0;


      It uses rsync to locally copy the files from my home directory to a backup location, /tmp/my-backups in my case.
      Below that target directory a directory with the current timestamp is created, e.g. /tmp/my-backups/2018-04-29T12:49:42 and below that directory the backup of that day is placed.



      When the script is run once again, then it notices that there is already a directory /tmp/my-backups/2018-04-29T12:49:42 (it picks the "latest" directory that matches the timestamp pattern). It then executes the rsync command but this time with the --link-dest=/tmp/my-backups/2018-04-29T12:49:42/ switch to point to the previous backup.



      This is the actual point of making incremental backups:



      With --link-dest=… rsync does not copy files that were unchanged compared to the files in the link-dest directory. Instead it just creates hardlinks between the current and the previous files.



      When you run this script 10 times, you get 10 directories with the various timestamps and each holds a snapshot of the files at that time. You can browse the directories and restore the files you want.



      Housekeeping is also very easy: Just rm -rf the timestamp directory you don't want to keep. This will not remove older or newer or unchanged files, just remove (decrement) the hardlinks. For example, if you have three generations:



      • /tmp/my-backups/2018-04-29T...

      • /tmp/my-backups/2018-04-30T...

      • /tmp/my-backups/2018-05-01T...

      and delete the 2nd directory, then you just loose the snapshot of that day but the files are still in either the 1st or the 3rd directory (or both).



      I've put a cronjob in /etc/cron.daily that reads:



      #!/bin/sh
      /usr/bin/systemd-cat -t backupscript -p info /home/pduck/bup.sh


      Name that file backup or something, chmod +x it, but omit the .sh suffix (it won't be run then). Due to /usr/bin/systemd-cat -t backupscript -p info you can watch the progress via journalctl -t backupscript.



      Note that this rsync solution requires the target directory to be on an ext4 filesystem because of the hardlinks.






      share|improve this answer


















      • 1




        I recommend using UTC time (date -u), since local time can change, especially if you're in a region using daylight saving time. Also date -u -Is will print the date in ISO 8601 format.
        – pim
        May 1 at 13:52










      • @pim Good catch, especially the UTC thing. And yes, we can play with the time format, omit the seconds or the time altogether. That depends on how exact you want it to be. I personally don't like the TZ suffix (+02:00 in my German case). The only important thing is that the lexical order must mirror the chronological order to simplify picking the previous directory. And yes, we shouldn't parse ls output.
        – PerlDuck
        May 1 at 14:01







      • 2




        I've adopt this nice script for my needs and added an exclude option (also putted some quote marks): paste.ubuntu.com/p/NgfXMPy8pK
        – pa4080
        May 1 at 17:09







      • 1




        @pa4080 Cool, I feel honored. :-) There are more directories you can exclude, e.g. ~/.cache and probably some other dotdirs.
        – PerlDuck
        May 1 at 17:13






      • 1




        Just for the records. I've created GitHub repository where my backup scripts are available: github.com/pa4080/simple-backup-solutions
        – pa4080
        May 8 at 7:29














      up vote
      5
      down vote













      This is (a variant of) the script I use (/home/pduck/bup.sh):





      #!/usr/bin/env bash

      src_dir=/home/pduck
      tgt_dir=/tmp/my-backups
      mkdir -p $tgt_dir

      # current backup directory, e.g. "2017-04-29T13:04:50";
      now=$(date +%FT%H:%M:%S)

      # previous backup directory
      prev=$(ls $tgt_dir | grep -e '^....-..-..T..:..:..$' | tail -1);

      if [ -z "$prev" ]; then
      # initial backup
      rsync -av --delete $src_dir $tgt_dir/$now/
      else
      # incremental backup
      rsync -av --delete --link-dest=$tgt_dir/$prev/ $src_dir $tgt_dir/$now/
      fi

      exit 0;


      It uses rsync to locally copy the files from my home directory to a backup location, /tmp/my-backups in my case.
      Below that target directory a directory with the current timestamp is created, e.g. /tmp/my-backups/2018-04-29T12:49:42 and below that directory the backup of that day is placed.



      When the script is run once again, then it notices that there is already a directory /tmp/my-backups/2018-04-29T12:49:42 (it picks the "latest" directory that matches the timestamp pattern). It then executes the rsync command but this time with the --link-dest=/tmp/my-backups/2018-04-29T12:49:42/ switch to point to the previous backup.



      This is the actual point of making incremental backups:



      With --link-dest=… rsync does not copy files that were unchanged compared to the files in the link-dest directory. Instead it just creates hardlinks between the current and the previous files.



      When you run this script 10 times, you get 10 directories with the various timestamps and each holds a snapshot of the files at that time. You can browse the directories and restore the files you want.



      Housekeeping is also very easy: Just rm -rf the timestamp directory you don't want to keep. This will not remove older or newer or unchanged files, just remove (decrement) the hardlinks. For example, if you have three generations:



      • /tmp/my-backups/2018-04-29T...

      • /tmp/my-backups/2018-04-30T...

      • /tmp/my-backups/2018-05-01T...

      and delete the 2nd directory, then you just loose the snapshot of that day but the files are still in either the 1st or the 3rd directory (or both).



      I've put a cronjob in /etc/cron.daily that reads:



      #!/bin/sh
      /usr/bin/systemd-cat -t backupscript -p info /home/pduck/bup.sh


      Name that file backup or something, chmod +x it, but omit the .sh suffix (it won't be run then). Due to /usr/bin/systemd-cat -t backupscript -p info you can watch the progress via journalctl -t backupscript.



      Note that this rsync solution requires the target directory to be on an ext4 filesystem because of the hardlinks.






      share|improve this answer


















      • 1




        I recommend using UTC time (date -u), since local time can change, especially if you're in a region using daylight saving time. Also date -u -Is will print the date in ISO 8601 format.
        – pim
        May 1 at 13:52










      • @pim Good catch, especially the UTC thing. And yes, we can play with the time format, omit the seconds or the time altogether. That depends on how exact you want it to be. I personally don't like the TZ suffix (+02:00 in my German case). The only important thing is that the lexical order must mirror the chronological order to simplify picking the previous directory. And yes, we shouldn't parse ls output.
        – PerlDuck
        May 1 at 14:01







      • 2




        I've adopt this nice script for my needs and added an exclude option (also putted some quote marks): paste.ubuntu.com/p/NgfXMPy8pK
        – pa4080
        May 1 at 17:09







      • 1




        @pa4080 Cool, I feel honored. :-) There are more directories you can exclude, e.g. ~/.cache and probably some other dotdirs.
        – PerlDuck
        May 1 at 17:13






      • 1




        Just for the records. I've created GitHub repository where my backup scripts are available: github.com/pa4080/simple-backup-solutions
        – pa4080
        May 8 at 7:29












      up vote
      5
      down vote










      up vote
      5
      down vote









      This is (a variant of) the script I use (/home/pduck/bup.sh):





      #!/usr/bin/env bash

      src_dir=/home/pduck
      tgt_dir=/tmp/my-backups
      mkdir -p $tgt_dir

      # current backup directory, e.g. "2017-04-29T13:04:50";
      now=$(date +%FT%H:%M:%S)

      # previous backup directory
      prev=$(ls $tgt_dir | grep -e '^....-..-..T..:..:..$' | tail -1);

      if [ -z "$prev" ]; then
      # initial backup
      rsync -av --delete $src_dir $tgt_dir/$now/
      else
      # incremental backup
      rsync -av --delete --link-dest=$tgt_dir/$prev/ $src_dir $tgt_dir/$now/
      fi

      exit 0;


      It uses rsync to locally copy the files from my home directory to a backup location, /tmp/my-backups in my case.
      Below that target directory a directory with the current timestamp is created, e.g. /tmp/my-backups/2018-04-29T12:49:42 and below that directory the backup of that day is placed.



      When the script is run once again, then it notices that there is already a directory /tmp/my-backups/2018-04-29T12:49:42 (it picks the "latest" directory that matches the timestamp pattern). It then executes the rsync command but this time with the --link-dest=/tmp/my-backups/2018-04-29T12:49:42/ switch to point to the previous backup.



      This is the actual point of making incremental backups:



      With --link-dest=… rsync does not copy files that were unchanged compared to the files in the link-dest directory. Instead it just creates hardlinks between the current and the previous files.



      When you run this script 10 times, you get 10 directories with the various timestamps and each holds a snapshot of the files at that time. You can browse the directories and restore the files you want.



      Housekeeping is also very easy: Just rm -rf the timestamp directory you don't want to keep. This will not remove older or newer or unchanged files, just remove (decrement) the hardlinks. For example, if you have three generations:



      • /tmp/my-backups/2018-04-29T...

      • /tmp/my-backups/2018-04-30T...

      • /tmp/my-backups/2018-05-01T...

      and delete the 2nd directory, then you just loose the snapshot of that day but the files are still in either the 1st or the 3rd directory (or both).



      I've put a cronjob in /etc/cron.daily that reads:



      #!/bin/sh
      /usr/bin/systemd-cat -t backupscript -p info /home/pduck/bup.sh


      Name that file backup or something, chmod +x it, but omit the .sh suffix (it won't be run then). Due to /usr/bin/systemd-cat -t backupscript -p info you can watch the progress via journalctl -t backupscript.



      Note that this rsync solution requires the target directory to be on an ext4 filesystem because of the hardlinks.






      share|improve this answer














      This is (a variant of) the script I use (/home/pduck/bup.sh):





      #!/usr/bin/env bash

      src_dir=/home/pduck
      tgt_dir=/tmp/my-backups
      mkdir -p $tgt_dir

      # current backup directory, e.g. "2017-04-29T13:04:50";
      now=$(date +%FT%H:%M:%S)

      # previous backup directory
      prev=$(ls $tgt_dir | grep -e '^....-..-..T..:..:..$' | tail -1);

      if [ -z "$prev" ]; then
      # initial backup
      rsync -av --delete $src_dir $tgt_dir/$now/
      else
      # incremental backup
      rsync -av --delete --link-dest=$tgt_dir/$prev/ $src_dir $tgt_dir/$now/
      fi

      exit 0;


      It uses rsync to locally copy the files from my home directory to a backup location, /tmp/my-backups in my case.
      Below that target directory a directory with the current timestamp is created, e.g. /tmp/my-backups/2018-04-29T12:49:42 and below that directory the backup of that day is placed.



      When the script is run once again, then it notices that there is already a directory /tmp/my-backups/2018-04-29T12:49:42 (it picks the "latest" directory that matches the timestamp pattern). It then executes the rsync command but this time with the --link-dest=/tmp/my-backups/2018-04-29T12:49:42/ switch to point to the previous backup.



      This is the actual point of making incremental backups:



      With --link-dest=… rsync does not copy files that were unchanged compared to the files in the link-dest directory. Instead it just creates hardlinks between the current and the previous files.



      When you run this script 10 times, you get 10 directories with the various timestamps and each holds a snapshot of the files at that time. You can browse the directories and restore the files you want.



      Housekeeping is also very easy: Just rm -rf the timestamp directory you don't want to keep. This will not remove older or newer or unchanged files, just remove (decrement) the hardlinks. For example, if you have three generations:



      • /tmp/my-backups/2018-04-29T...

      • /tmp/my-backups/2018-04-30T...

      • /tmp/my-backups/2018-05-01T...

      and delete the 2nd directory, then you just loose the snapshot of that day but the files are still in either the 1st or the 3rd directory (or both).



      I've put a cronjob in /etc/cron.daily that reads:



      #!/bin/sh
      /usr/bin/systemd-cat -t backupscript -p info /home/pduck/bup.sh


      Name that file backup or something, chmod +x it, but omit the .sh suffix (it won't be run then). Due to /usr/bin/systemd-cat -t backupscript -p info you can watch the progress via journalctl -t backupscript.



      Note that this rsync solution requires the target directory to be on an ext4 filesystem because of the hardlinks.







      share|improve this answer














      share|improve this answer



      share|improve this answer








      edited Apr 29 at 11:28









      pa4080

      12k52255




      12k52255










      answered Apr 29 at 11:11









      PerlDuck

      3,72511030




      3,72511030







      • 1




        I recommend using UTC time (date -u), since local time can change, especially if you're in a region using daylight saving time. Also date -u -Is will print the date in ISO 8601 format.
        – pim
        May 1 at 13:52










      • @pim Good catch, especially the UTC thing. And yes, we can play with the time format, omit the seconds or the time altogether. That depends on how exact you want it to be. I personally don't like the TZ suffix (+02:00 in my German case). The only important thing is that the lexical order must mirror the chronological order to simplify picking the previous directory. And yes, we shouldn't parse ls output.
        – PerlDuck
        May 1 at 14:01







      • 2




        I've adopt this nice script for my needs and added an exclude option (also putted some quote marks): paste.ubuntu.com/p/NgfXMPy8pK
        – pa4080
        May 1 at 17:09







      • 1




        @pa4080 Cool, I feel honored. :-) There are more directories you can exclude, e.g. ~/.cache and probably some other dotdirs.
        – PerlDuck
        May 1 at 17:13






      • 1




        Just for the records. I've created GitHub repository where my backup scripts are available: github.com/pa4080/simple-backup-solutions
        – pa4080
        May 8 at 7:29












      • 1




        I recommend using UTC time (date -u), since local time can change, especially if you're in a region using daylight saving time. Also date -u -Is will print the date in ISO 8601 format.
        – pim
        May 1 at 13:52










      • @pim Good catch, especially the UTC thing. And yes, we can play with the time format, omit the seconds or the time altogether. That depends on how exact you want it to be. I personally don't like the TZ suffix (+02:00 in my German case). The only important thing is that the lexical order must mirror the chronological order to simplify picking the previous directory. And yes, we shouldn't parse ls output.
        – PerlDuck
        May 1 at 14:01







      • 2




        I've adopt this nice script for my needs and added an exclude option (also putted some quote marks): paste.ubuntu.com/p/NgfXMPy8pK
        – pa4080
        May 1 at 17:09







      • 1




        @pa4080 Cool, I feel honored. :-) There are more directories you can exclude, e.g. ~/.cache and probably some other dotdirs.
        – PerlDuck
        May 1 at 17:13






      • 1




        Just for the records. I've created GitHub repository where my backup scripts are available: github.com/pa4080/simple-backup-solutions
        – pa4080
        May 8 at 7:29







      1




      1




      I recommend using UTC time (date -u), since local time can change, especially if you're in a region using daylight saving time. Also date -u -Is will print the date in ISO 8601 format.
      – pim
      May 1 at 13:52




      I recommend using UTC time (date -u), since local time can change, especially if you're in a region using daylight saving time. Also date -u -Is will print the date in ISO 8601 format.
      – pim
      May 1 at 13:52












      @pim Good catch, especially the UTC thing. And yes, we can play with the time format, omit the seconds or the time altogether. That depends on how exact you want it to be. I personally don't like the TZ suffix (+02:00 in my German case). The only important thing is that the lexical order must mirror the chronological order to simplify picking the previous directory. And yes, we shouldn't parse ls output.
      – PerlDuck
      May 1 at 14:01





      @pim Good catch, especially the UTC thing. And yes, we can play with the time format, omit the seconds or the time altogether. That depends on how exact you want it to be. I personally don't like the TZ suffix (+02:00 in my German case). The only important thing is that the lexical order must mirror the chronological order to simplify picking the previous directory. And yes, we shouldn't parse ls output.
      – PerlDuck
      May 1 at 14:01





      2




      2




      I've adopt this nice script for my needs and added an exclude option (also putted some quote marks): paste.ubuntu.com/p/NgfXMPy8pK
      – pa4080
      May 1 at 17:09





      I've adopt this nice script for my needs and added an exclude option (also putted some quote marks): paste.ubuntu.com/p/NgfXMPy8pK
      – pa4080
      May 1 at 17:09





      1




      1




      @pa4080 Cool, I feel honored. :-) There are more directories you can exclude, e.g. ~/.cache and probably some other dotdirs.
      – PerlDuck
      May 1 at 17:13




      @pa4080 Cool, I feel honored. :-) There are more directories you can exclude, e.g. ~/.cache and probably some other dotdirs.
      – PerlDuck
      May 1 at 17:13




      1




      1




      Just for the records. I've created GitHub repository where my backup scripts are available: github.com/pa4080/simple-backup-solutions
      – pa4080
      May 8 at 7:29




      Just for the records. I've created GitHub repository where my backup scripts are available: github.com/pa4080/simple-backup-solutions
      – pa4080
      May 8 at 7:29










      up vote
      3
      down vote













      With a little edit to your cron command you can add a timestamp to the filename:



      0 5 * * 1 sudo tar -Pzcf /var/backups/home_$(date "+%Y-%m-%d_%H-%M-%S").tgz /home/


      And as for the cleaning I found an awesome one-line script here that I adapted to your case:



      find . -type f -name 'home_*.tgz' -exec sh -c 'bcp="$1%_*"; bcp="$bcp#*_"; [ "$bcp" "<" "$(date +%F -d "60 days ago")" ] && rm "$1"' 0 ;


      You can add the above command to another cron job and it will remove backups older than 60 days. HTH






      share|improve this answer
























        up vote
        3
        down vote













        With a little edit to your cron command you can add a timestamp to the filename:



        0 5 * * 1 sudo tar -Pzcf /var/backups/home_$(date "+%Y-%m-%d_%H-%M-%S").tgz /home/


        And as for the cleaning I found an awesome one-line script here that I adapted to your case:



        find . -type f -name 'home_*.tgz' -exec sh -c 'bcp="$1%_*"; bcp="$bcp#*_"; [ "$bcp" "<" "$(date +%F -d "60 days ago")" ] && rm "$1"' 0 ;


        You can add the above command to another cron job and it will remove backups older than 60 days. HTH






        share|improve this answer






















          up vote
          3
          down vote










          up vote
          3
          down vote









          With a little edit to your cron command you can add a timestamp to the filename:



          0 5 * * 1 sudo tar -Pzcf /var/backups/home_$(date "+%Y-%m-%d_%H-%M-%S").tgz /home/


          And as for the cleaning I found an awesome one-line script here that I adapted to your case:



          find . -type f -name 'home_*.tgz' -exec sh -c 'bcp="$1%_*"; bcp="$bcp#*_"; [ "$bcp" "<" "$(date +%F -d "60 days ago")" ] && rm "$1"' 0 ;


          You can add the above command to another cron job and it will remove backups older than 60 days. HTH






          share|improve this answer












          With a little edit to your cron command you can add a timestamp to the filename:



          0 5 * * 1 sudo tar -Pzcf /var/backups/home_$(date "+%Y-%m-%d_%H-%M-%S").tgz /home/


          And as for the cleaning I found an awesome one-line script here that I adapted to your case:



          find . -type f -name 'home_*.tgz' -exec sh -c 'bcp="$1%_*"; bcp="$bcp#*_"; [ "$bcp" "<" "$(date +%F -d "60 days ago")" ] && rm "$1"' 0 ;


          You can add the above command to another cron job and it will remove backups older than 60 days. HTH







          share|improve this answer












          share|improve this answer



          share|improve this answer










          answered Apr 29 at 22:56









          Eskander Bejaoui

          1,0141619




          1,0141619




















              up vote
              3
              down vote













              Here is part of a solution from my daily backup script which is called by cron: Backup Linux configuration, scripts and documents to Gmail. The full script is in appropriate because:



              • it includes targeted /home/me/* files but skips 1 GB of /home/ files important to you used by FireFox, Chrome and other apps which I have no interest in backing up.

              • it includes important files to me but unimportant to you in /etc/cron*, /etc/system*, /lib/systemd/system-sleep, /etc/rc.local, /boot/grub, /usr/share/plymouth, /etc/apt/trusted.gpg, etc.

              • it emails the backup every morning to my gmail.com account for off-site backups. Your backups are not only on-site but also on the same machine.

              Here is the relevant script, parts of the which you might adapt:



              #!/bin/sh
              #
              # NAME: daily-backup
              # DESC: A .tar backup file is created, emailed and removed.
              # DATE: Nov 25, 2017.
              # CALL: WSL or Ubuntu calls from /etc/cron.daily/daily-backup
              # PARM: No parameters, but /etc/ssmtp/ssmtp.conf must be set up

              # NOTE: Backup file name contains machine name + Distro.
              #       Same script for user with multiple dual-boot laptops.
              #       Single machine should remove $(hostname) from name.
              #       Single distribution should remove $Distro.

              sleep 30 # Wait 30 seconds after boot

              # Running under WSL (Windows Subsystem for Linux)?
              if grep -q Microsoft /proc/version; then
                  Distro="WSL"
              else
                  Distro="Ubuntu"
              fi

              today=$( date +%Y-%m-%d-%A )
              /mnt/e/bin/daily-backup.sh "Daily-$(hostname)-$Distro-backup-$today"
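
              The /mnt/e/bin/daily-backup.sh helper itself is not shown above. As a rough sketch (the names, paths and directory list here are illustrative, not the real helper), it might tar the chosen directories and mail the archive through the ssmtp setup mentioned in the header:

              #!/bin/sh
              # Hypothetical daily-backup.sh sketch; adapt the paths and
              # directory list, none of these names come from the real script.
              # $1 = base name, e.g. Daily-myhost-Ubuntu-backup-2018-04-30-Monday
              BASE="$1"
              ARCHIVE="/tmp/$BASE.tar.gz"

              # -P keeps absolute paths, as in the question's tar command
              tar -Pzcf "$ARCHIVE" /home/me /etc/cron.d /etc/rc.local

              # mpack (package "mpack") mails a file as a MIME attachment
              # via the local sendmail interface, which ssmtp provides
              mpack -s "$BASE" "$ARCHIVE" someone@gmail.com

              rm -f "$ARCHIVE"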



              My gmail.com account is only 35% full (out of 15 GB), so my daily backups can run for a while longer before I have to delete files. But rather than an "everything older than xxx" philosophy I'll use a grandfather-father-son strategy, as outlined here: Is it necessary to keep records of my backups?. In summary (a classification sketch follows the list):



              • Monday to Sunday (daily) backups, purged after 14 days

              • Sunday (weekly) backups, purged after 8 weeks

              • Last-day-of-month (monthly) backups, purged after 18 months

              • Last-day-of-year (yearly) backups, kept forever

              My purging process will be complicated by the fact that I'll have to learn Python and install a Python library to manage Gmail folders.



              If you don't want generational backups and just want to purge files older than 2 months, this answer will help: Find not removing files in folders through bash script.



              In summary:



              DAYS_TO_KEEP=60
              find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -mtime +"$DAYS_TO_KEEP" -exec rm -rf {} \;
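
              (-mtime +60 matches files last modified more than 60 days ago, and -mindepth 1 stops find from matching the backup directory itself.) For the cron-based setup in the question, the purge can run as its own weekly job, e.g. an hour after the backup (a sketch, assuming GNU find; -delete removes only what find matched, unlike rm -rf):

              0 6 * * 1 find /var/backups -maxdepth 1 -name 'home_*.tgz' -mtime +60 -delete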





              share|improve this answer
























                  answered Apr 29 at 23:22









                  WinEunuuchs2Unix
