Sorry, to clarify: they were ~650-700MiB tars, but combined they came to around 4GiB.
Other options?
How do others maintain data integrity? Just not worry about it until you have to, rely on backups... or?
Justin.
4GB files using gpg and tar in the '90s?
I know gpg had 2GB file-related bugs as late as 2005 that caused corruption, and there were a heck of a lot of 2GB-related bugs in the 2.2 and 2.4 kernels you must have been running back then. You are also using later versions of these programs on the new systems, and I'd be willing to bet they compound the problem by assuming there was no corruption to begin with.
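Just to make the failure mode concrete, the kind of pipeline being described is presumably something like the following; the paths and the choice of symmetric encryption are my guesses, not taken from the earlier mails:

    # tar + gzip + gpg in one pass; the encrypted output is one large file
    tar -cf - /data | gzip | gpg -c -o data.tar.gz.gpg

    # decrypt and unpack later
    gpg -d data.tar.gz.gpg | gunzip | tar -xf -

Any stage in that chain that mishandles offsets past 2GiB (an old gpg, or a 2.2/2.4 kernel without proper large-file support) can silently corrupt the stream, and nothing downstream notices until the restore fails.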
I use:

- gzipped tar archives, but I gzip the individual files rather than the tarball. That way any compression-related bugs are limited to a single file. I copy them to DVDs. Once I restored the data from DVDs, I was able to restore *all* data successfully.

That works as well, but are they your regular files gzipped, no encryption?

- For online/nearline, I now use ZFS, but on a native Solaris system that functions as my primary NFS/CIFS/iSCSI server, with a ZFS software-RAID based file system. I am profoundly impressed with it, and when they release the deduplication enhancement for ZFS, I'll adopt it and won't have to buy any more DVDs, except for offsite archiving purposes.

Wow, I did not know ZFS had plans for de-dupe!! I will have to look into this.
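For what it's worth, gzipping the individual files and only then tar'ing them up might look roughly like this; the directory and archive names are invented for the example:

    # compress each file in place first, so a bad block only hurts that one file
    find /data/archive -type f -exec gzip -9 {} +

    # then collect the .gz files into an *uncompressed* tar for burning to DVD
    tar -cf archive-001.tar /data/archive

Because the outer tar is not itself compressed, a damaged stretch of the tar (or of the DVD) should only take out the .gz members it overlaps, instead of everything after the first error as with a corrupted .tar.gz.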
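And a minimal sketch of the ZFS side, assuming a Solaris/OpenSolaris box and a ZFS release that actually ships the dedup property (pool, dataset and disk names are made up):

    # let ZFS do the software RAID itself: a raidz2 pool across six disks
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

    # one dataset, exported over NFS and CIFS
    zfs create tank/backup
    zfs set sharenfs=on tank/backup
    zfs set sharesmb=on tank/backup

    # once deduplication ships, it should just be another dataset property
    zfs set dedup=on tank/backup

End-to-end checksumming is on by default in ZFS, which is a big part of why it is a reasonable answer to the "how do others maintain data integrity" question above.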