Re: [PATCH] Invalidate_inodes can be very slow

From: Kirill Korotaev
Date: Mon Oct 13 2003 - 06:44:59 EST


> William Lee Irwin III <wli@xxxxxxxxxxxxxx> wrote:
> >> Untested brute-force forward port to 2.6.0-test7-bk4. No idea if the
> >> locking is correct or if list movement is done in all needed places.
>
> On Mon, Oct 13, 2003 at 04:08:21AM -0700, Andrew Morton wrote:
> > My preferred approach to this would be to move all the global inode lists
> > into the superblock so both they and inode_lock become per-sb.
> > It is a big change though. And, amazingly, nobody has yet hit sufficient
> > inode_lock contention to warrant it.
> > Yes, I bet that this search will hurt like hell on a really big box which
> > has thousands of auto-expiring NFS mounts. Please test your patch and
> > I'll queue it up while we think about it some more.
>
> Generally dcache_lock stands in front of inode_lock, even with the
> current hashtable RCU code. inode_lock has been seen before in unusual
> situations I don't remember offhand, though generally it's not #1.
> The workloads used for, say, benchmark testing don't adequately model
> situations like what you just mentioned (or a number of other real-life
> usage cases), so per-sb inode_lock may be worth considering on a priori
> grounds, though it would probably be better to actually set something
> up to test that scenario.
This patch can be tested quite easily. The main changes are in
invalidate_list(), and that path is triggered by umount and quota-off.
So I tested it the following way:
1. mounting a partition, working on it, and unmounting it (and turning
quota on/off);
2. running a memeater to force prune_icache() while the partition was
mounted, to check that the lists stay consistent (a sketch of such a
memeater follows the list).
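
Any trivial memory hog will do for item 2; something along these lines
(an illustrative sketch, not the actual program used):

	/*
	 * Illustrative memeater: allocate and touch memory until malloc()
	 * fails, putting the box under enough pressure that the VM shrinks
	 * the inode cache (prune_icache) while the partition is mounted.
	 */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define CHUNK (16 * 1024 * 1024)

	int main(void)
	{
		size_t total = 0;
		char *p;

		while ((p = malloc(CHUNK)) != NULL) {
			memset(p, 0xaa, CHUNK);	/* touch every page */
			total += CHUNK;
			printf("ate %zu MB\n", total >> 20);
		}
		return 0;
	}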

All other places should be fine, since only simple list_add()/list_del()
calls are inserted there.
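
For readers who have not seen the patch itself, the direction being
discussed is roughly this: give each super_block its own inode list,
maintained with plain list_add()/list_del(), so that invalidate_list()
only walks the inodes of the sb being unmounted instead of scanning the
global inode lists for inodes that happen to belong to it. A minimal
sketch (field and function names are illustrative, not necessarily what
the patch or current kernels use):

	/*
	 * Sketch only: each super_block keeps its own list of inodes.
	 * Inodes are linked onto it when set up and unlinked when
	 * destroyed, both under inode_lock, so invalidate_list() can do a
	 * list_for_each() over sb->s_inodes instead of filtering the
	 * global in-use/unused lists.
	 */
	struct super_block {
		/* ... existing fields ... */
		struct list_head s_inodes;	/* every inode of this sb */
	};

	struct inode {
		/* ... existing fields ... */
		struct list_head i_sb_list;	/* entry on i_sb->s_inodes */
	};

	/* called under inode_lock when a new inode is set up for a sb */
	static inline void inode_add_to_sb(struct inode *inode,
					   struct super_block *sb)
	{
		list_add(&inode->i_sb_list, &sb->s_inodes);
	}

	/* called under inode_lock when the inode is being destroyed */
	static inline void inode_del_from_sb(struct inode *inode)
	{
		list_del(&inode->i_sb_list);
	}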

Kirill
