It seems like the POSIX idea of a unique <st_dev, st_ino> pair doesn't
hold water for modern filesystems.
are you really sure?
Well, Jan's example was Coda, which uses 128-bit internal file IDs.
and if so, why don't we fix *THAT* instead
Hmm, sometimes you can't fix the world, especially if the filesystem
is exported over NFS and simply can't fit its file IDs uniquely
into a 64-bit identifier.
Note, it's pretty easy to fit _anything_ into a 64-bit identifier with
the use of a good hash function. The chance of an accidental
collision is infinitesimally small. For a set of
100 files: 0.00000000000003%
1,000,000 files: 0.000003%
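Something along these lines, say. This is just a throwaway sketch: the
128-bit fid layout and the choice of FNV-1a are stand-ins for "a good
hash function", not anything a real filesystem is committed to.

#include <stdint.h>
#include <stdio.h>

struct fid128 {				/* made-up 128-bit file id */
	uint64_t hi;
	uint64_t lo;
};

/* Fold the 128-bit id down to a 64-bit st_ino-sized value with FNV-1a. */
static uint64_t fid_to_ino64(const struct fid128 *fid)
{
	const unsigned char *p = (const unsigned char *)fid;
	uint64_t h = 14695981039346656037ULL;	/* FNV-1a offset basis */
	size_t i;

	for (i = 0; i < sizeof(*fid); i++) {
		h ^= p[i];
		h *= 1099511628211ULL;		/* FNV-1a prime */
	}
	return h;
}

int main(void)
{
	struct fid128 fid = { 0x0123456789abcdefULL, 0xfedcba9876543210ULL };

	printf("ino: %016llx\n", (unsigned long long)fid_to_ino64(&fid));
	return 0;
}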
I do not think we want to play with probability like this. I mean...
imagine 4G files, 1KB each. That's 4TB disk space, not _completely_
unreasonable, and the collision probability is already around 35% due to
the birthday paradox.
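For what it's worth, the birthday bound is easy to check. A quick
throwaway program (assuming a uniformly distributed 64-bit hash; link
with -lm) reproduces the percentages quoted above and puts the 4G-file
case at about 35%:

#include <math.h>
#include <stdio.h>

/* P(collision) ~= 1 - exp(-n*(n-1)/2 / 2^64) for n uniform 64-bit hashes */
static double collision_prob(double n)
{
	double pairs = n * (n - 1.0) / 2.0;
	double space = 18446744073709551616.0;	/* 2^64 */

	return -expm1(-pairs / space);		/* 1 - e^-x, computed accurately */
}

int main(void)
{
	double sizes[] = { 100.0, 1e6, 4e9 };
	size_t i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%12.0f files: P(collision) = %.3g\n",
		       sizes[i], collision_prob(sizes[i]));
	return 0;
}

With 2^64 possible values you need on the order of 2^32 objects before a
collision becomes likely, and 4G files is exactly that regime.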
You'll still want to back up your 4TB server...
Certainly, but tar isn't going to remember all the inode numbers.
Even if you solve the storage requirements (not impossible), checking
for duplicates by brute force would take (4e9)^2 / 2 = 8e18 comparisons,
and computers don't have that much CPU power just yet.
It doesn't matter if there are collisions within the filesystem, as
long as there are no collisions between the set of files an
application is working on at the same time.
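Roughly this kind of bookkeeping, I mean. This is a sketch, not tar's
actual code, and the names (seen_table, remember_or_find) are made up:
the archiver only records the (st_dev, st_ino) of files with
st_nlink > 1 that it has already written, so the only collisions that
can hurt it are within that working set.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>

struct seen {
	dev_t dev;
	ino_t ino;
	char *first_name;		/* path it was first archived under */
	struct seen *next;
};

#define NBUCKETS 65536
static struct seen *seen_table[NBUCKETS];

static unsigned int bucket(dev_t dev, ino_t ino)
{
	return (unsigned int)(((uint64_t)dev * 31 + (uint64_t)ino) % NBUCKETS);
}

/*
 * Return the name this inode was first stored under, or NULL if it is
 * new (in which case remember it for later lookups).
 */
static const char *remember_or_find(const struct stat *st, const char *name)
{
	unsigned int b = bucket(st->st_dev, st->st_ino);
	struct seen *s;

	if (st->st_nlink <= 1)
		return NULL;		/* no other links can exist */

	for (s = seen_table[b]; s; s = s->next)
		if (s->dev == st->st_dev && s->ino == st->st_ino)
			return s->first_name;

	s = malloc(sizeof(*s));
	if (!s)
		return NULL;
	s->dev = st->st_dev;
	s->ino = st->st_ino;
	s->first_name = strdup(name);
	s->next = seen_table[b];
	seen_table[b] = s;
	return NULL;
}

int main(int argc, char *argv[])
{
	int i;

	for (i = 1; i < argc; i++) {
		struct stat st;
		const char *linked_to;

		if (lstat(argv[i], &st) != 0 || !S_ISREG(st.st_mode))
			continue;
		linked_to = remember_or_find(&st, argv[i]);
		if (linked_to)
			printf("%s is a hard link to %s\n", argv[i], linked_to);
		else
			printf("%s archived\n", argv[i]);
	}
	return 0;
}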
Miklos