Re: Large slab cache in 2.6.1
From: Dipankar Sarma
Date: Sun Feb 22 2004 - 16:14:23 EST
On Sun, Feb 22, 2004 at 08:08:43AM -0800, Martin J. Bligh wrote:
> I still don't understand the rationale behind the way we currently do it -
> perhaps I'm just being particularly dense. If we have 10,000 pages full of
> dcache, and start shooting entries in LRU order of the entries rather than
> of the pages they sit on, then (assuming random access to the dcache) we'll
> shoot roughly the same number of entries from each dcache page without
> actually freeing any pages at all, just trashing the cache.
>
> Now I'm aware access isn't really random, which probably saves our arse.
> But then some of the entries will be locked too, which only makes things
> worse (we free a bunch of entries from that page, but the page itself
> still isn't freeable). So it still seems likely to me that we'll blow
> away at least half of the dcache entries before we free any significant
> number of pages at all. That seems insane to me. Moreover, the more times
> we shrink & fill, the worse the layout will get (less grouping of "recently
> used entries" into the same page).
Do you have a quick test to demonstrate this? That would be useful.
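
Something like the following user-space sketch might do as a starting
point (purely illustrative, not kernel code - the page and object counts
are made up). It models the argument above: with random access, the
global LRU order is effectively a random shuffle of entries across slab
pages, so pruning in LRU order empties very few whole pages until most
of the cache is already gone.

#include <stdio.h>
#include <stdlib.h>

#define NPAGES   10000			/* slab pages full of dentries */
#define PER_PAGE 15			/* objects per page (made up) */
#define NOBJS    (NPAGES * PER_PAGE)

static int owner[NOBJS];		/* owner[i]: page holding LRU entry i */
static int live[NPAGES];		/* live objects left on each page */

int main(void)
{
	int i, freed = 0;

	for (i = 0; i < NOBJS; i++) {
		owner[i] = i / PER_PAGE;
		live[owner[i]]++;
	}

	/* Random access pattern => global LRU order is a random shuffle. */
	srand(1);
	for (i = NOBJS - 1; i > 0; i--) {
		int j = rand() % (i + 1);
		int t = owner[i]; owner[i] = owner[j]; owner[j] = t;
	}

	/* Prune in LRU order; a page is freeable when its count hits 0. */
	for (i = 0; i < NOBJS; i++) {
		if (--live[owner[i]] == 0)
			freed++;
		if ((i + 1) % (NOBJS / 10) == 0)
			printf("pruned %6d entries -> %5d/%d pages freeable\n",
			       i + 1, freed, NPAGES);
	}
	return 0;
}

With 15 objects per page, pruning half the entries leaves each page
fully empty with probability of roughly 0.5^15, i.e. essentially no
pages come free - which is exactly the behaviour being complained about.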
> Moreover, it seems rather expensive to do a write operation for each
> dentry just to maintain the LRU list over entries. But maybe we don't do
> that anymore with dcache RCU - I lost track of what that does ;-( So
> doing it on a page-LRU basis still makes a damned sight more sense to me.
> Don't we want semantics like "once used vs twice used" preference
> treatment for dentries, etc. anyway?
Dcache-RCU hasn't changed how dentries are freed to the slab much; it is
still LRU. On a given CPU, dentries are still returned to the slab
in dcache LRU order.
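
To make that concrete, the 2.6 prune path has roughly the following
shape (a simplified sketch of prune_dcache() from fs/dcache.c, with the
in-use checks, stats accounting and the lock-dropping inside
prune_one_dentry() trimmed away - don't read it as a faithful copy).
It always takes the oldest entry off the global dentry_unused list, so
the slab page a free lands on is whatever page that dentry happens to
occupy. The DCACHE_REFERENCED second chance is also roughly the
"once used vs twice used" treatment you ask about.

static void prune_dcache(int count)
{
	spin_lock(&dcache_lock);
	for (; count; count--) {
		struct dentry *dentry;
		struct list_head *tmp;

		tmp = dentry_unused.prev;	/* oldest unused dentry */
		if (tmp == &dentry_unused)
			break;
		list_del_init(tmp);
		dentry = list_entry(tmp, struct dentry, d_lru);

		/* Recently referenced entries get a second chance. */
		if (dentry->d_flags & DCACHE_REFERENCED) {
			dentry->d_flags &= ~DCACHE_REFERENCED;
			list_add(&dentry->d_lru, &dentry_unused);
			continue;
		}
		/*
		 * Frees this one dentry back to the slab; the page it
		 * sits on stays allocated while any neighbour is live.
		 */
		prune_one_dentry(dentry);
	}
	spin_unlock(&dcache_lock);
}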
I have always wondered how useful the global dcache LRU mechanism
actually is. This adds another reason for us to go and experiment
with it.
Thanks
Dipankar