I've seen lots of questions along the lines of "How do you safely transfer data to a remote backup server without doing things like allowing root SSH access between systems?"
The method that's worked best for me is: create a non-privileged account (e.g., "bkup") for all your remote copying, so you can collect your files as root if necessary but never have to allow root to do anything on a remote host:
me% getent passwd bkup
bkup:*:47:47:Remote backups:/home/bkup:/bin/sh
I usually set bkup's password to 40 or 50 random characters, since there's no reason for anyone to log in as bkup.
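On a typical Linux box, creating the account looks something like this; the UID/GID of 47, the home directory, and the use of chpasswd are just examples, so adjust for your OS:

   root# groupadd -g 47 bkup
   root# useradd -u 47 -g 47 -c 'Remote backups' -d /home/bkup -s /bin/sh -m bkup

   # Scramble the password rather than leaving it empty.
   root# echo "bkup:$(openssl rand -base64 36)" | chpasswd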
The setuidgid program from Dan Bernstein's daemontools is very useful here; I can run anything as any user without having to dork around with getting the quoting right when running su:
root# setuidgid username command you want to run
Example:
root# setuidgid bkup id
uid=47(bkup) gid=47(bkup) groups=47(bkup)
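If you don't have daemontools handy, setpriv from util-linux can do roughly the same thing (a sketch of an equivalent invocation, not what the script below relies on):

   root# setpriv --reuid=bkup --regid=bkup --clear-groups id
   uid=47(bkup) gid=47(bkup) groups=47(bkup)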
Here's a small script which accepts a list of files to copy, uses tar to batch them up, and then uses ssh to dump them as a gzipped archive on another system:
#!/bin/ksh
#<tar2bk: accept list of files, dump it to backup server
#
# source filename is (say) /path/to/list
# destination filename is $TMPDIR/basename-of-list.tgz

export PATH=/usr/local/bin:/bin:/usr/bin

TMPDIR=${TMPDIR:-/tmp}
ident='/path/to/ssh/ident/file'
cipher='chacha20-poly1305@openssh.com'
host='local.backup.com'

# Only argument is a list of files to copy.
case "$#" in
    0)  echo need a list of files; exit 1 ;;
    *)  list="$1" ;;
esac

test -f "$list" || { echo $list not found; exit 2; }
b=$(basename $list)

# If root's running this, use setuidgid.
id | grep 'uid=0(root)' > /dev/null
case "$?" in
    0)  copycmd="setuidgid bkup ssh -c $cipher" ;;
    *)  copycmd="ssh -c $cipher -i $ident" ;;
esac

# All that for one command.
tar --no-recursion --files-from=$list -cf - | gzip -1c |
    $copycmd $host "/bin/cat > $TMPDIR/$b.tgz"

exit 0
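Using it is just a matter of building a file list and handing it to the script; a quick illustration (the paths here are made up):

   root# find /etc /home -type f -print > /tmp/etc-home.list
   root# /path/to/tar2bk /tmp/etc-home.list

Since the script uses the basename of the list, the archive lands on the backup host as $TMPDIR/etc-home.list.tgz.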
I generally copy to /tmp since it's a *tmpfs* filesystem, which makes it fast, but if you need something larger or more permanent, /var/tmp works just fine:
TMPDIR=/var/tmp /path/to/tar2bk ...
If I have a huge number of files to copy, I prefer to do it in stages: copy one or two gigabytes at a time and unpack as we go. On the sending host, the loop looks roughly like this:
for each sublist
do
    create a tarball
    copy the tarball to staging ("temp") area on the receiving host
    after the copy is done, move tarball to "current" directory on the destination host
done

copy a sentinel file (i.e., 'finished') to "current" on receiving host
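Here's a rough shell rendering of that sending loop. It's only a sketch: the split(1) batching, the sublist names, and the /var/tmp paths on the backup host are assumptions, not part of the script above.

   #!/bin/ksh
   # Break the master list into sublists (by line count here; adjust
   # to approximate your 1-2 Gbyte batches), then ship each one.
   split -l 2000 /path/to/list /tmp/sublist.

   for list in /tmp/sublist.*
   do
       b=$(basename $list)

       # Build, compress, and copy into the staging area, then move the
       # tarball into "current" only after the copy has finished.
       tar --no-recursion --files-from=$list -cf - | gzip -1c |
           ssh local.backup.com \
               "/bin/cat > /var/tmp/temp/$b.tgz && mv /var/tmp/temp/$b.tgz /var/tmp/current/$b.tgz"
   done

   # Tell the receiver we're done.
   ssh local.backup.com "touch /var/tmp/current/finished"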
cd "current" directory while true do if (the file "finished" is present) remove it break endif else if (a tarball is present) unpack and remove it endif sleep 5 minutes done
Feel free to send comments.