How about e2fsck doing it? The only thing is that some news servers might
not reboot for around a year, and a news server is one case where -
* the server is _never_ idle, so a low-priority thread gets very little done
* directories may bloat massively for a short time and never be large again,
  but the wasted space causes a lasting performance loss
* the slowdown is very significant. I have a perl script which walks the
  spool and, wherever the ratio of the directory size estimated from the
  file names to the actual directory size is bad and the directory is big,
  makes a new directory, moves the articles across, and removes the old one.
  I run this when the news server gets too slow and, magic, it's fast again
  (and yes, I've already killed *.jobs* and split control.cancel into 256
  subdirs).
  (this is somewhat improved by the inum hack, which reduces the number
  of open()s, however)
* you wouldn't want to hook every single unlink()
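For what it's worth, the ratio check described above could be sketched
roughly like this (in Python rather than perl; the dirent layout numbers
and the thresholds here are my assumptions for illustration, not taken
from the actual script):

```python
import os

BLOCK = 1024  # assumed ext2 block size

def estimated_dir_bytes(names):
    """Minimum bytes the entries would need in a freshly packed directory."""
    used = 0
    for name in names:
        # assumed ext2-style dirent: 8-byte header plus the name,
        # padded to a 4-byte boundary
        used += 8 + ((len(name) + 3) & ~3)
    # round up to whole blocks; an empty directory still takes one block
    blocks = max(1, -(-used // BLOCK))
    return blocks * BLOCK

def looks_bloated(path, ratio=4, min_size=64 * 1024):
    """True when the on-disk directory is far larger than its entries need."""
    actual = os.stat(path).st_size
    estimate = estimated_dir_bytes(os.listdir(path))
    # only bother when the directory is big AND mostly wasted blocks
    return actual >= min_size and actual > ratio * estimate
```

When looks_bloated() fires, the rebuild is just: mkdir a sibling, rename
the articles into it, rmdir the old directory, rename the new one into
place.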
Maybe open() should take note of directories which some simple heuristic
(read: many wasted blocks) thinks are quite bad, and set a timer to do
something about them later, so that no latency is introduced into open()
itself.
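The "set a timer" idea amounts to paying only a queue insert on the open()
path and doing the expensive rebuild from a worker afterwards. A minimal
userspace sketch of that split (all names here are illustrative, nothing
resembling a real kernel interface):

```python
import queue

class DeferredCompactor:
    """Cheap flagging on the hot path; the expensive fix runs off-path."""

    def __init__(self, fix):
        self.fix = fix        # the costly rebuild, supplied by the caller
        self.q = queue.Queue()

    def note(self, path):
        # called from the open() fast path: O(1), no I/O
        self.q.put(path)

    def run_once(self):
        # would normally fire from a timer; drains everything queued so far
        while not self.q.empty():
            self.fix(self.q.get())
```

The point of the split is that a burst of opens on a bad directory costs
only a few queue inserts, and the rebuild happens once, later.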
David.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.altern.org/andrebalsa/doc/lkml-faq.html