Using OpenSSH with keys can facilitate secure automated backups. It's a myth that remote root access must be allowed. sudo works just fine -- if properly configured. rsync, tar, and dump are the foundation for most backup methods. Remember, that until the backup data has been tested and shown to restore reliably, it does not count as a backup copy.
Backup with rsync
rsync now defaults to using SSH as its transport, but SSH can still be specified explicitly:
$ rsync --exclude '*~' -avv \
      -e "ssh" \
      email@example.com:./archive \
      /Users/fred/archive/.
For some types of data, transfer can be sped up greatly by using rsync with compression, -z.
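A minimal sketch of the same transfer with compression added; the host and paths are the placeholders from the example above, not real endpoints:

```shell
# Same transfer as above, but with -z added so data is compressed in
# transit; this helps most with text-like data, little with already
# compressed files
rsync --exclude '*~' -avvz \
    -e "ssh" \
    email@example.com:./archive \
    /Users/fred/archive/.
```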
Rsync with keys
rsync can authenticate using SSH keys. If the key is added to an agent, then the passphrase only needs to be entered once:
$ rsync --exclude '*~' -avv \
      -e "ssh -i ~/.ssh/key_rsa" \
      firstname.lastname@example.org:./archive \
      /Users/fred/archive/.
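One way to load the key into an agent first, so that the passphrase is entered only once per session; the key path is the one assumed in the example above:

```shell
# Start an agent for this shell session, if one is not already running
eval "$(ssh-agent -s)"

# Add the backup key; the passphrase is asked for once, here
ssh-add ~/.ssh/key_rsa

# Subsequent rsync runs over ssh then use the agent without prompting
rsync --exclude '*~' -avv \
    -e "ssh -i ~/.ssh/key_rsa" \
    firstname.lastname@example.org:./archive \
    /Users/fred/archive/.
```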
Backup with rsync and sudo
rsync is often used to back up locally or remotely. rsync is fast and flexible, and it copies incrementally so only the changes are transferred, avoiding the waste of re-copying what is already at the destination. It does that through its well-known delta-transfer algorithm. When working remotely, it needs a little help with the encryption, and the usual practice is to tunnel it over SSH.
Preparation: create an account to use for the backup, create a pair of keys to use only for backup, then make sure you can log in to that account with ssh with and without those keys.
$ ssh -t -i ~/.ssh/mybkupkey email@example.com
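The preparation step above can be sketched as follows; the key path matches the login test above, while the comment string and the use of ssh-copy-id are illustrative assumptions:

```shell
# Generate a key pair used only for backups; give it a real passphrase in
# practice, and a comment (-C) identifying its purpose
ssh-keygen -t rsa -b 4096 -C "backup key" -f ~/.ssh/mybkupkey

# Install the public key into the backup account's authorized_keys on the
# server
ssh-copy-id -i ~/.ssh/mybkupkey.pub email@example.com
```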
Step 1: Configure sudoers and test rsync with sudo on the remote host. In this case the data stays on the remote machine.
Step 2: Test rsync with sudo over ssh.
$ ssh -l bkupacct www.example.org sudo rsync -av /var/www/ /tmp/
It will be necessary to tune /etc/sudoers a little at this stage; more refinements may come later. Note that the rsync user and the ssh user can be different. In this case the data gets copied from the remote machine to the local /tmp/.
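A loose starting point for /etc/sudoers (edited with visudo) might look like the following; it will be tightened considerably in Step 4, and the path to the rsync binary is an assumption that varies by system:

```
bkupacct ALL = (root) NOPASSWD: /usr/bin/rsync
```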
$ rsync -e "ssh -t -l bkupacct" --rsync-path='sudo rsync' \
      -av firstname.lastname@example.org:/var/www/ /tmp/
Step 3: Use the key.
$ rsync -e "ssh -i ~/.ssh/key -t -l bkupacct" --rsync-path='sudo rsync' \
      -av email@example.com:/var/www/ /tmp/
Step 4: Adjust /etc/sudoers so that the backup account has enough access to run rsync, but only in the directories it is supposed to run in and without free rein on the system. Use the first debugging level to see the actual parameters being passed to the remote host. That provides the basis of what /etc/sudoers will need:
$ rsync -e "ssh -t -v" --rsync-path='sudo rsync' \
      -av firstname.lastname@example.org:/var/www/ /tmp/
...
debug1: Sending command: sudo rsync --server --sender -e.iLs . /var/www
...
Be sure that the backed-up data is not accessible to others. At this point you are done. However, the process can be automated much further.
Step 5: Test the finished configuration of rsync with sudo over ssh.
$ rsync -e "ssh -t" --rsync-path='sudo rsync' \
      -av email@example.com:/var/www/ /tmp/
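As noted above, the process can be automated much further. One hedged sketch, assuming the account and key from the earlier steps, is a nightly crontab entry on the client (added with crontab -e); for unattended runs the key must either have no passphrase or be supplied by an agent:

```
# m h dom mon dow  command -- run the backup every night at 02:30;
# key path, host, and destination are placeholders from the examples above
30 2 * * * rsync -e "ssh -i /home/fred/.ssh/key -l bkupacct" --rsync-path='sudo rsync' -a email@example.com:/var/www/ /tmp/
```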
To summarize the pieces: the account on the server is named bkupacct, and the private RSA key on the client is ~/.ssh/key_bkup_rsa. On the server, the account bkupacct is a member of the group autobackup.
The public key, ~/.ssh/key_bkup_rsa.pub, has been copied to the account bkupacct on the server and placed in ~/.ssh/authorized_keys there.
The following directories on the server are owned by root, belong to the group bkupacct, and are group readable but not group writeable, and definitely not world readable: ~ and ~/.ssh. The same goes for the file ~/.ssh/authorized_keys there. (This assumes you are not also using ACLs.) This is one way of many to set the permissions on the server:
$ sudo chown root:bkupacct ~
$ sudo chown root:bkupacct ~/.ssh/
$ sudo chown root:bkupacct ~/.ssh/authorized_keys
$ sudo chmod u=rwx,g=rx,o= ~
$ sudo chmod u=rwx,g=rx,o= ~/.ssh/
$ sudo chmod u=rwx,g=r,o= ~/.ssh/authorized_keys
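The resulting ownership and modes can be double-checked afterwards; the home directory path in the expected output is an assumption:

```shell
# Check the resulting ownership and modes; the mode strings in the
# comments follow from the chmod settings above (dates and sizes differ)
ls -ld ~ ~/.ssh ~/.ssh/authorized_keys
# drwxr-x---  root bkupacct  /home/bkupacct
# drwxr-x---  root bkupacct  /home/bkupacct/.ssh
# -rwxr-----  root bkupacct  /home/bkupacct/.ssh/authorized_keys
```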
Say you're backing up from the server to the client. rsync on the client uses ssh to make the connection to rsync on the server. rsync is invoked from the client like this to see exactly which parameters are being passed to the server:
$ rsync \
      -e "ssh \
          -i ~/.ssh/key_bkup_rsa \
          -t \
          -l bkupacct" \
      --rsync-path='sudo rsync' \
      --delete \
      --archive \
      --compress \
      --verbose \
      bkupacct@server:/var/www \
      /media/backups/server/backup/
sudo will need to be configured on the server. The argument --rsync-path tells the server what to run in place of rsync; in this case it runs sudo rsync. The argument -e says which remote shell tool to use; in this case it is ssh. For the SSH client being called by rsync, -i says which key, specifically, to use, independently of whether an authentication agent holds the key. It is possible to have more than one key, with different keys for different tasks.
Keep making adjustments to /etc/sudoers on the server until it works as it should. You can find the exact setting(s) to use in /etc/sudoers by running ssh in verbose mode (-v) on the client. Be careful when working with patterns not to match more than is safe.
%autobackup ALL=(ALL) NOPASSWD: /usr/local/bin/rsync --server \
    --sender -vlogDtpre.if . /var/www/
Backup using tar
The main choice for creating archives is tar. But since it copies whole files and directories, rsync is usually much more efficient for updates or incremental backups.
The following will make a tarball of the directory /var/www and send it via stdout through a pipe into ssh, where, on the remote machine, it is directed into a file called backup.tar. Here tar runs on the local machine and the tarball is stored remotely:
$ tar cf - /var/www/ | ssh -l fred server.example.org "cat > backup.tar"
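As noted at the start, a backup does not count until it has been tested. A quick, hedged sanity check is to compare checksums on each side; this assumes the host and file name from the example above, and that the source files do not change between the two tar runs:

```shell
# Checksum the tar stream locally...
tar cf - /var/www/ | sha256sum

# ...and compare against a checksum of the stored file on the remote side;
# the two digests should match if the transfer was intact
ssh -l fred server.example.org "sha256sum backup.tar"
```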
There are really limitless options for that recipe:
$ tar zcf - /var/www/ /home/*/www/ \
      | ssh -l fred server.example.org "cat > $(date +"%Y-%m-%d").tar.gz"
That will do the same, but also get user www directories, compress the tarball using gzip, and label the resulting file according to the current date.
The same again, this time authenticating with a key:
$ tar zcf - /var/www/ /home/*/www/ \
      | ssh -i key -l fred server.example.org "cat > $(date +"%Y-%m-%d").tgz"
It is just as easy to tar what is on a remote machine and store the tarball locally.
$ ssh firstname.lastname@example.org "tar zcf - /var/www/" > backup.tgz
Or, in a fancier example, tar runs on the remote machine but the tarball is stored locally:
$ ssh -i key -l fred server.example.org "tar jcf - /var/www/ /home/*/www/" \
      > $(date +"%Y-%m-%d").tar.bz2
The secret to these backups is the use of stdout and stdin to effect the transfer, through judicious use of pipes and redirection.
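Again, an untested tarball is not yet a backup. A hedged sketch of a trial restore, assuming a file named backup.tgz as in the example above:

```shell
# List the archive's contents as a quick sanity check
tar ztf backup.tgz

# Then do a trial restore into a scratch directory and inspect the result
mkdir /tmp/restore-test
tar -C /tmp/restore-test -zxf backup.tgz
```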
Backup using dump
Using dump remotely is like using tar. One can copy from the remote server to the local machine.
$ ssh -t source.example.org 'sudo dump -0an -f - /var/www | gzip -c9' > backup.dump.gz
Note that the password prompt for sudo might not be visible and it must be typed blindly.
Or one can go the other direction, copying from the local machine to the remote server:
$ sudo dump -0an -f - /var/www | gzip -c9 | ssh target.example.org 'cat > backup.dump.gz'
Note that here the password prompt might get hidden in the initial output from dump. It's still there.
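The dump archives should be tested too. One hedged way, assuming the file name from the examples above and that the restore(8) utility from the dump package is installed, is to list the archive's table of contents:

```shell
# Decompress the dump and list its contents as a basic restore test;
# -t lists the table of contents, -f - reads the archive from stdin
gzip -dc backup.dump.gz | sudo restore -tf -
```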