Updated 2018-09-24 18:29:40 by dbohdan


CMcC: The idea is to set up a user on a remote machine, and back up to it using rsync and ssh.

The program I took this from, rsnapshot [1], is written in Perl. I wasn't happy with it because it doesn't back up to a remote machine.

One nice feature is that it keeps complete snapshots of the directories while using hard links to save space - the total space consumed by all of the snapshots should be proportional to the size of one copy plus the size of the changed files.
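The hard-link trick can be seen in a small shell sketch (the /tmp paths and file names here are purely illustrative):

```shell
# Start clean so the demo can be re-run
rm -rf /tmp/snapdemo
mkdir -p /tmp/snapdemo/daily.0
echo "unchanged data" > /tmp/snapdemo/daily.0/file
# "cp -al" copies the directory tree but hard-links every file
# instead of duplicating its data
cp -al /tmp/snapdemo/daily.0/ /tmp/snapdemo/daily.1/
# Both snapshot paths now share one inode, so the data is stored once;
# a later rsync --delete into daily.0 replaces changed files with fresh
# inodes, leaving daily.1's old version untouched
ls -i /tmp/snapdemo/daily.0/file /tmp/snapdemo/daily.1/file
```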

Because it relies upon ssh, and a dedicated user on a remote machine, there's a fair bit of setup required. It also depends on remote ssh tcl, from this Wiki.

It should be suitable for cron to run periodically, and you can expect all that nice rsync performance. Remove the -P argument to rsync to quieten it down for cron deployment. This should be a command line option, of course.
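For cron deployment, the crontab of the user running snapshot might look something like this (the install path and times are illustrative, and assume -P has already been removed from the rsync line):

```
# m h dom mon dow  command
0  *  *   *   *    /usr/local/bin/snapshot hourly /home
30 3  *   *   *    /usr/local/bin/snapshot daily  /home
```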

It needs frills like --exclude lists, etc, to be really finished.

snapshot uses ssh, and assumes the existence of an account on the remote machine with the following qualities: the remote account's login shell is tclsh, and password login is not permitted.
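Such an account might be created with useradd -s /usr/bin/tclsh and locked with passwd -l; the resulting /etc/passwd entry looks roughly like this (the account name, uid/gid and home directory here are assumptions):

```
backup:x:1001:1001:snapshot backup:/home/backup:/usr/bin/tclsh
```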

The user running snapshot must have keys sufficient to ssh-connect to the remote account without a password or passphrase, and must also have permissions sufficient to read the files being snapshotted.
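Setting up such a key might look like this sketch (the key path is an example; the remote account name matches the $remote variable in the script below):

```shell
# Start clean so ssh-keygen doesn't prompt to overwrite
rm -f /tmp/backup_key /tmp/backup_key.pub
# Generate a key with an empty passphrase (-N "") for non-interactive use
ssh-keygen -q -t ed25519 -N "" -f /tmp/backup_key
# Then install the public key on the remote account, e.g.:
#   ssh-copy-id -i /tmp/backup_key.pub user@machine
ls /tmp/backup_key /tmp/backup_key.pub
```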

Important Security Note:

Needless to say, it's an inherently risky business to be providing networked services as root, even via sudo, and it's incumbent upon you, the user, to understand what you're doing and assure yourself that it's safe.

It is imperative that you edit /etc/sudoers to restrict the range of programs the backup user can run via sudo. If you don't, then anyone with the remote user's ssh key can run anything as root.

If you want to preserve times and owners, you have to ensure the user is in /etc/group under group sudo, and add the line "user ALL = /usr/bin/rsync" to /etc/sudoers, where user is the name of your remote account (this restricts the backup remote account so it can't run anything and everything as root :)
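A sketch of that /etc/sudoers entry; note that the NOPASSWD tag is an assumption on my part - without it, sudo would prompt for a password that the non-interactive ssh session can't supply:

```
# "user" is the backup account's name; restrict it to rsync only
user ALL = NOPASSWD: /usr/bin/rsync
```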

If you don't care about the times, uid and gid, then knock out the sudos from both of the following files.
    #!/usr/bin/env tclsh
    # snapshot - use rsync to rsync a snapshot of a directory to a remote machine
    # via rsync and ssh.
    # Usage: snapshot interval directory ?directory ...?
    #        where interval is one of: hourly, daily, weekly, monthly

    source ssh.tcl
    package require ssh
    # account@machine to run the remote
    set remote user@machine
    # location on remote to store snapshots
    set root /var/backup
    # schedule of snapshots - how many for each category
    array set schedule {
        hourly 24
        daily   7
        weekly  4
        monthly 3
    }

    if {[info exists argv0] && ($argv0 == [info script])} {
        if {[llength $argv] < 2} {
            puts stderr "Usage: [info script] <interval> <directory>\nwhere interval is: one of hourly, daily, weekly, monthly"
            exit 1
        }
        set interval [lindex $argv 0]
        connect $remote
        # send remote our globals
        remote [subst {
            array set schedule [list [array get schedule]]
            set root [file normalize $root]
            array get schedule
        }]
        # rotate according to schedule
        remote {
            proc rotate {interval} {
                global schedule
                global root
                if {![file exists $root]} {
                    file mkdir $root
                }
                set stem [file join $root $interval]
                if {![file exists ${stem}.0]} {
                    # brand new
                    file mkdir ${stem}.0
                }
                # delete the oldest snapshot
                if {[file exists $stem.$schedule($interval)]} {
                    file delete -force $stem.$schedule($interval)
                }
                # age snapshot names
                for {set i $schedule($interval)} {$i > 0} {incr i -1} {
                    if {[file exists $stem.$i]} {
                        file rename $stem.$i $stem.[expr {$i + 1}]
                    }
                }
                # age/link files from the most recent snapshot to .1
                exec /bin/cp -al $stem.0/ $stem.1/
            }
        }
        remote "rotate $interval"        ;# first rotate this interval's snapshot
        remote exit                      ;# clean up the rotation
        set stem [file join $root $interval]
        # rsync the local dirs to the appropriate snapshot
        foreach dir [lrange $argv 1 end] {
            set dir  [file normalize $dir]
            # flatten the path into a snapshot name, e.g. /home/user -> @home@user
            set dest [file join ${stem}.0 [string map {/ @} $dir]]/
            exec sudo /usr/bin/rsync -a -P -S -z --delete --numeric-ids ${dir}/ ${remote}:${dest} >@stdout 2>@stderr
        }
    }

.tclshrc in remote user's home directory.
    # This goes into the remote user's home directory.  Note the sudo.
    # When ssh runs a command, tclsh is invoked as "tclsh -c command";
    # run that command via sudo over binary, unbuffered pipes.
    if {[info exists argv]} {
        if {[lindex $argv 0] == "-c"} {
            fconfigure stdin  -buffering none -encoding binary -translation {binary binary}
            fconfigure stdout -buffering none -encoding binary -translation {binary binary}
            eval exec sudo [lindex $argv 1] >@stdout <@stdin 2>/tmp/snapshot.err
        }
    }

mzgcoco: My simple backup tool for the Windows platform: Tackup