On Fri, Apr 08, 2005 at 05:12:49PM -0700, Linus Torvalds wrote:
> really designed for something like a offline http grabber, in that you can
> just grab files purely by filename (and verify that you got them right by
> running sha1sum on the resulting local copy). So think "wget".
I'm not entirely convinced wget is going to be an efficient way to
synchronize and fetch your tree, though its simplicity is certainly
appealing. It's a tradeoff between optimizing and re-using existing
tools (like webservers).
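Just so I'm sure I follow the model, it would look roughly like this,
right? (Quick Python sketch; the URL layout and names here are made up,
the point is only "fetch by sha1 name, then re-hash the local copy".)

import hashlib
import urllib.request

# Made-up URL layout; the real one is whatever directory the web
# server happens to export.
BASE_URL = "http://server.example.org/objects"

def fetch_and_verify(sha1_name):
    # Grab the object purely by its (sha1) filename...
    with urllib.request.urlopen("%s/%s" % (BASE_URL, sha1_name)) as resp:
        data = resp.read()
    # ...and "sha1sum the local copy": the name is the hash, so checking
    # integrity is just re-hashing the bytes we got.
    if hashlib.sha1(data).hexdigest() != sha1_name:
        raise ValueError("corrupt or truncated object: " + sha1_name)
    return data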
Perhaps that's why you were compressing the stuff too? It sounds better
not to compress the stuff on-disk, and to synchronize with an rsync-like
protocol (an rsync server would do) that handles compression in the
network protocol itself. That way the compression can be applied to one
large blob (i.e. the diff between the trees) instead of to each tiny
file individually.
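
To put a rough number on that last point, here's a toy sketch (plain
zlib from Python, standing in for whatever the transport would actually
use) of why one big stream compresses so much better than the same data
split into lots of tiny files:

import zlib

# 1000 tiny, very similar "files"
files = [("int func_%d(void) { return %d; }\n" % (i, i)).encode()
         for i in range(1000)]

per_file = sum(len(zlib.compress(f)) for f in files)  # each file on its own
one_blob = len(zlib.compress(b"".join(files)))        # the whole batch at once

print("per-file total: %d bytes, single blob: %d bytes" % (per_file, one_blob))

rsync already behaves that way with -z: the files stay uncompressed on
disk and the stream gets deflated on the wire.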