On Feb 29 2000, nbecker@fred.net wrote:
> I suspect the answer is that the data structures used to represent
> directories are not well suited to efficient operations on extremely
> large directories, but I was just curious.
AFAIK, there are two things you can do to avoid this:
* hash the files across two (or more) directory levels (see the
sketch below);
* if that's not possible, use a filesystem whose directories are
stored as a tree instead of a linear list (it seems -- but I'm not
sure -- that reiserfs does this).
[]s, Roger...
--
Rogerio Brito - rbrito@iname.com - http://www.ime.usp.br/~rbrito/
Nectar homepage: http://www.linux.ime.usp.br/~rbrito/opeth/