On Thursday 05 September 2002 20:51, Andrew Morton wrote:
> Daniel Phillips wrote:
> >
> > ..
> > On the other hand, if what you want is a private list that
> > page_cache_release doesn't act on automatically, all you have to do is
> > set the lru state to zero, leave the page count incremented and move the
> > page to the private list. You then take explicit responsibility for
> > freeing the page or moving it back onto a mainstream lru list.
>
> That's the one. Page reclaim speculatively removes a chunk of pages
> (typically 32) from the LRU, works on them, and puts back any unfreeable
> ones later on.
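Just so we're talking about the same thing, here's the shape I picture.
This is a sketch only, not your actual shrink_cache: the function names
are mine, and pagemap_lru_lock here stands in for whatever lock actually
guards the lru.

#include <linux/mm.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/*
 * Sketch: pull up to nr pages off the tail of the inactive list,
 * pinning each one so a racing page_cache_release can't free it
 * while it is off the lru.
 */
static void isolate_lru_chunk(struct list_head *inactive,
                              struct list_head *chunk, int nr)
{
        spin_lock(&pagemap_lru_lock);
        while (nr-- && !list_empty(inactive)) {
                struct page *page =
                        list_entry(inactive->prev, struct page, lru);
                ClearPageLRU(page);             /* lru state to zero */
                list_del(&page->lru);
                page_cache_get(page);           /* pin for the duration */
                list_add(&page->lru, chunk);
        }
        spin_unlock(&pagemap_lru_lock);
}

/*
 * Sketch: after trying to free the chunk with the lock dropped,
 * put the unfreeable pages back and drop the pins.
 */
static void putback_lru_chunk(struct list_head *chunk,
                              struct list_head *inactive)
{
        spin_lock(&pagemap_lru_lock);
        while (!list_empty(chunk)) {
                struct page *page =
                        list_entry(chunk->next, struct page, lru);
                list_del(&page->lru);
                SetPageLRU(page);
                list_add(&page->lru, inactive);
                /*
                 * Drop the pin outside the lock: a final release has
                 * to cope with the page being back on the lru, and may
                 * itself want the lru lock.
                 */
                spin_unlock(&pagemap_lru_lock);
                page_cache_release(page);
                spin_lock(&pagemap_lru_lock);
        }
        spin_unlock(&pagemap_lru_lock);
}

The pin taken under the lock in the first half is what lets the chunk be
worked on with the lock dropped, which I take to be the whole point.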
Convergent evolution. That's exactly the same number I'm handling as a
chunk in my rmap sharing project (backgrounded for the moment). Here the
32 comes from the number of bits you can store in a long, and it
conveniently happens to fall in the performance sweet spot as well. Other
than the chunk size, though, the resemblance is slight: I only have to
manipulate one list element per chunk, because I'm taking the lru list
itself out onto the shared rmap node, saving 8 bytes in struct page as a
side effect. ETA for this is 2.7, unless you suddenly start threatening
to back out rmap again.
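Purely to illustrate the point about the long - this toy is not lifted
from my patch, and the names are invented - with the chunk size equal to
the bits in a long, the per-page state of a whole chunk is one machine
word, so testing or updating it is a single bit operation instead of a
walk over 32 separate flags:

#include <linux/mm.h>
#include <asm/bitops.h>

#define CHUNK_PAGES     BITS_PER_LONG           /* 32 on ia32 */

struct rmap_chunk {
        unsigned long present;                  /* bit n set: slot n live */
        struct page *pages[CHUNK_PAGES];
};

static inline void chunk_add(struct rmap_chunk *chunk, int slot,
                             struct page *page)
{
        chunk->pages[slot] = page;
        set_bit(slot, &chunk->present);
}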
> And the rest of the VM was taught to play correctly with pages that
> can be off the LRU. This was to avoid hanging onto the LRU lock while
> running page reclaim.
>
> And when those 32 pages are speculatively removed, their refcounts are
> incremented. Maybe that isn't necessary - I'd need to think about
> that. If it isn't, then the double-free thing is fixed. If it is
> necessary then the lru-adds-a-ref approach is nice, because shrink_cache
> doesn't need to page_cache_get each page while holding the LRU lock,
> as you say.
I think the extra refcount strategy is inherently stronger, and this is an
example of why: the other approach would require you to take and drop an
extra reference explicitly for your private list.
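Concretely - invented helper names, and assume the caller holds the lru
lock in both cases - with lru-adds-a-ref the move simply inherits the
count the lru was already holding:

#include <linux/mm.h>
#include <linux/list.h>

static void move_to_private(struct page *page, struct list_head *private)
{
        ClearPageLRU(page);
        list_del(&page->lru);           /* the lru's ref is now ours */
        list_add(&page->lru, private);
}

/*
 * Without that convention the same move needs an explicit get here,
 * plus a matching page_cache_release when the page leaves the private
 * list - the take/drop pair I mean above.
 */
static void move_to_private_explicit(struct page *page,
                                     struct list_head *private)
{
        page_cache_get(page);           /* explicit extra count */
        ClearPageLRU(page);
        list_del(&page->lru);
        list_add(&page->lru, private);
}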
--
Daniel