Re: huge filesystems

From: Eric W. Biederman
Date: Sat Mar 19 2005 - 06:11:53 EST


Andreas Dilger <adilger@xxxxxxx> writes:

> On Mar 14, 2005 21:37 -0700, jmerkey wrote:
> > 1. Scaling issues with readdir() with huge numbers of files (not even
> > huge, really: 87000 files in a dir takes a while
> > for readdir() to return results). I average 2-3 million files per
> > directory on 2.6.9. It can take up to a minute for
> > readdir() to return from the initial reading of one of these
> > directories through the VFS.
>
> Actually, unless I'm mistaken, the problem is that "ls" (even when you
> ask it not to sort entries) is doing readdir on the whole directory
> before returning any results. We see this with Lustre and very large
> directories. Run strace on "ls" and it is doing masses of readdirs, but
> no output to stdout. Lustre readdir works OK on directories up to 10M
> files, but ls sucks.

The classic test is: does 'echo *', which does the readdir but not the
stat, come back quickly?

Anyway, most of the readdir work happens in the filesystem, so I don't
see how the VFS would be involved...

Eric
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/