Re: [RFC] Tracking mlocked pages and moving them off the LRU
From: Christoph Lameter
Date: Wed Feb 07 2007 - 03:03:02 EST
On Tue, 6 Feb 2007, Nate Diller wrote:
> > The dirty ratio with the ZVCS would be
> >
> > NR_DIRTY + NR_UNSTABLE_NFS
> > /
> > NR_FREE_PAGES + NR_INACTIVE + NR_ACTIVE + NR_MLOCK
>
> I don't understand why you want to account mlocked pages in
> dirty_ratio. of course mlocked pages *can* be dirty, but they have no
> relevance in the write throttling code. the point of dirty ratio is
mlocked pages can of course be dirty as well. So if we do not include
NR_MLOCK in the total number of pages that could be dirty, then we may
in some cases end up with a dirty ratio above 100%.
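To make the >100% case concrete with numbers that are purely made up
for illustration: say NR_DIRTY is 500, of which 400 pages are also
mlocked, and NR_FREE_PAGES + NR_INACTIVE + NR_ACTIVE comes to 300.
Then

	without NR_MLOCK in the denominator: 500 / 300         = ~167%
	with NR_MLOCK in the denominator:    500 / (300 + 400) = ~71%

so only the second form stays bounded by 100%.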
> to guarantee that there are some number of non-dirty, non-pinned,
> non-mlocked pages on the LRU, to (try to) avoid deadlocks where the
> writeback path needs to allocate pages, which many filesystems like to
> do. if an mlocked page is clean, there's still no way to free it up,
> so it should not be treated as being on the LRU at all, for write
> throttling. the ideal (IMO) dirty ratio would be
Hmmm... I think write throttling is different from reclaim. In write
throttling the major objective is to decouple applications from the
physical I/O, so the dirty ratio specifies how much "buffer" space can
be used for I/O. There is also the issue that too many dirty pages
make reclaim difficult, because a dirty page can only be reclaimed
after its writeback is complete.

And yes, that last point does not hold for mlocked pages: a clean
mlocked page still cannot be reclaimed.
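Concretely, with the counters from the ratio above, the throttling
check would end up looking roughly like the sketch below. This is
only a sketch: it assumes every counter named in this thread is
available as a ZVC item (NR_MLOCK is of course only proposed here),
and the real balance_dirty_pages() logic is more involved than this.

#include <linux/vmstat.h>
#include <linux/writeback.h>
#include <linux/backing-dev.h>

/* Sketch: throttle a writer once the dirty "buffer" share is used up. */
static void throttle_if_over_dirty_ratio(void)
{
	unsigned long dirty = global_page_state(NR_DIRTY) +
			      global_page_state(NR_UNSTABLE_NFS);
	unsigned long total = global_page_state(NR_FREE_PAGES) +
			      global_page_state(NR_INACTIVE) +
			      global_page_state(NR_ACTIVE) +
			      global_page_state(NR_MLOCK);

	if (dirty * 100 > total * vm_dirty_ratio)
		/* wait for writeback to make some progress */
		congestion_wait(WRITE, HZ / 10);
}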
>
> NR_DIRTY - NR_DIRTY_MLOCKED + NR_UNSTABLE_NFS
> /
> NR_FREE_PAGES + NR_INACTIVE + NR_ACTIVE
>
> obviously it's kinda useless to keep an NR_DIRTY_MLOCKED counter, any
> of these mlock accounting schemes could easily be modified to update
> the NR_DIRTY counter so that it only reflects dirty unpinned pages,
> and not mlocked ones.
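For concreteness, I read that adjustment as something like the sketch
below. The helpers are hypothetical and the counter names simply
follow this thread, so take it as an illustration rather than an
existing interface:

#include <linux/mm.h>
#include <linux/page-flags.h>
#include <linux/vmstat.h>

/*
 * Move a page's accounting into the mlocked state and keep NR_DIRTY
 * restricted to dirty pages that are not pinned.
 */
static void account_mlocked_page(struct page *page)
{
	inc_zone_page_state(page, NR_MLOCK);

	if (PageDirty(page))
		dec_zone_page_state(page, NR_DIRTY);
}

/* The reverse on munlock, so the counters stay balanced. */
static void unaccount_mlocked_page(struct page *page)
{
	dec_zone_page_state(page, NR_MLOCK);

	if (PageDirty(page))
		inc_zone_page_state(page, NR_DIRTY);
}

A real patch would of course have to deal with the races against
set_page_dirty() and the dirty page accounting, which this sketch
glosses over.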
So you would be okay with the dirty ratio possibly going above 100% if
mlocked pages are dirty?
> is that the only place you wanted to have an accurate mlocked page count?
Rik had some other ideas on what to do with it. I also think we may end up
checking for excessively high mlock counts in various tight VM situations.