Re: problems with large directories - not only ext2

Hermann Schichl (herman@esi.ac.at)
Mon, 09 Aug 1999 23:28:03 +0200


> Linux gets extremely slow on operations like "ls", when done on large
> directories (1000+ files). For example, when listing a few large news
> directories I can watch how my window manager re-draws its windows, and
> typing in commands is awkward (but possible).
>
> Until now I thought this was a limitation of ext2. But just now I ran ls -lR
> on a CD with large directories, and the result was just the same. Here is a
> vmstat listing:
>
I'd guess that this is for two reasons:
a) the -l flag forces ls to stat every single file
b) sorting has not been turned off, so ls must read the whole directory
   and sort the entries alphabetically, which might cause the creation
   of intermediate temporary files or swapping if the directory is very
   big (-R does not improve this situation).
This is not a limitation of Linux (IMHO) but one of UNIX; most Unices
show this behaviour. If you try ls -fR instead, times should probably
scale linearly with directory size.
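
To see where the time goes, here is a minimal sketch (not from the original
mail; the program name dirscan and the paths are just my own illustration)
that walks a directory with readdir() and can optionally stat() every entry,
which is roughly the difference between what ls and ls -l have to do:

/*
 * dirscan.c - compare plain directory reads against readdir() + stat().
 * Build with: cc -o dirscan dirscan.c
 * Usage: ./dirscan <directory> [-s]   (-s additionally stat()s each entry)
 */
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <limits.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
	DIR *dir;
	struct dirent *de;
	struct stat st;
	char path[PATH_MAX];
	long entries = 0;
	int do_stat = (argc > 2 && strcmp(argv[2], "-s") == 0);

	if (argc < 2) {
		fprintf(stderr, "usage: %s <directory> [-s]\n", argv[0]);
		return 1;
	}

	dir = opendir(argv[1]);
	if (!dir) {
		perror("opendir");
		return 1;
	}

	/* Read every entry; with -s also stat() it, as ls -l must. */
	while ((de = readdir(dir)) != NULL) {
		entries++;
		if (do_stat) {
			snprintf(path, sizeof(path), "%s/%s",
				 argv[1], de->d_name);
			if (stat(path, &st) < 0)
				perror(path);
		}
	}
	closedir(dir);

	printf("%ld entries%s\n", entries, do_stat ? " (stat'ed)" : "");
	return 0;
}

Timing both modes on a large directory (e.g. time ./dirscan /some/news/group
versus the same run with -s) should show how much of the slowdown comes from
the per-file stat() calls rather than from reading the directory itself.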

But maybe I'm wrong, since I have not done much testing.

Hermann
