Re: [PATCH] mm: vmscan: Correctly check if reclaimer should schedule during shrink_slab

From: Colin Ian King
Date: Thu May 19 2011 - 07:36:43 EST


On Thu, 2011-05-19 at 09:09 +0900, Minchan Kim wrote:
> Hi Colin.
>
> Sorry for bothering you. :(

No problem at all, I'm very happy to re-test.

> I hope this test will be the last one.
>
> We (Mel, KOSAKI and I) have finalized our opinion.
>
> Could you test the patch below together with patch [1/4] of Mel's series
> (i.e. the !pgdat_balanced check in sleeping_prematurely)?
> If it is successful, we will try to merge this version instead of the
> various cond_resched() sprinkling versions.

Tested with the patch below + patch [1/4] of Mel's series: 300 cycles,
2.5 hours of soak testing, and it works OK.

Colin
>
>
> On Wed, May 18, 2011 at 1:15 AM, Mel Gorman <mgorman@xxxxxxx> wrote:
> > It has been reported on some laptops that kswapd is consuming large
> > amounts of CPU and not being scheduled when SLUB is enabled during
> > large amounts of file copying. It is expected that this is due to
> > kswapd missing every cond_resched() point because:
> >
> > shrink_page_list() calls cond_resched() if inactive pages were isolated
> > which in turn may not happen if all_unreclaimable is set in
> > shrink_zones(). If, for whatever reason, all_unreclaimable is
> > set on all zones, we can miss calling cond_resched().
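
For anyone following along, here is a minimal userspace sketch of that
pattern; this is not kernel code, and the flag and function names are
just illustrative stand-ins for the behaviour described above:

	#include <sched.h>
	#include <stdbool.h>
	#include <stdio.h>

	/* Stand-in for the zone state described above (assumed for the example). */
	static bool all_unreclaimable = true;

	static void shrink_zone_sketch(void)
	{
		if (all_unreclaimable)
			return;		/* no pages isolated ... */

		/* ... so the only voluntary reschedule point is never reached */
		sched_yield();		/* stands in for cond_resched() */
	}

	int main(void)
	{
		for (int i = 0; i < 3; i++)
			shrink_zone_sketch();	/* loops without ever yielding */
		puts("finished without a single voluntary reschedule");
		return 0;
	}
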
> >
> > balance_pgdat() only calls cond_resched() if the zones are not
> > balanced. For a high-order allocation that is balanced, it
> > checks order-0 again. During that window, order-0 might have
> > become unbalanced so it loops again for order-0 and returns
> > that it was reclaiming for order-0 to kswapd(). It can then
> > find that a caller has woken kswapd again for a high-order and
> > re-enters balance_pgdat() without ever calling cond_resched().
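
Again purely as an illustration (a userspace sketch with invented helper
names, not the real balance_pgdat()), the loop shape being described is
roughly:

	#include <stdbool.h>
	#include <stdio.h>

	/* Assumed zone state for the example: the high-order watermarks look
	 * balanced, but order-0 does not. */
	static bool zones_balanced(int order)
	{
		return order > 0;
	}

	static int balance_pgdat_sketch(int order)
	{
		for (;;) {
			if (!zones_balanced(order)) {
				/* only the unbalanced path contains the
				 * cond_resched()-style yield point */
				break;
			}
			if (order > 0 && !zones_balanced(0)) {
				order = 0;	/* re-check order-0 ...         */
				continue;	/* ... with no yield in between */
			}
			break;
		}
		return order;	/* the caller is told reclaim was for this order */
	}

	int main(void)
	{
		printf("reclaimed for order %d\n", balance_pgdat_sketch(2));
		return 0;
	}
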
> >
> > shrink_slab() only calls cond_resched() if we are reclaiming slab
> > pages. If there are a large number of direct reclaimers, the
> > shrinker_rwsem can be contended and prevent kswapd calling
> > cond_resched().
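
And for the shrink_slab() case, the pre-patch behaviour on contention
looks roughly like this in userspace terms (a pthread rwlock standing in
for the rwsem, names illustrative) -- which is exactly the path the patch
below now routes through cond_resched():

	#include <pthread.h>
	#include <stdio.h>

	static pthread_rwlock_t shrinker_lock = PTHREAD_RWLOCK_INITIALIZER;

	static unsigned long shrink_slab_sketch(void)
	{
		if (pthread_rwlock_tryrdlock(&shrinker_lock) != 0)
			return 1;	/* "assume we can shrink next time":
					   no work done, and no yield either */

		/* ... walk the shrinker list, which is where the only
		 * cond_resched()-style yields live ... */
		pthread_rwlock_unlock(&shrinker_lock);
		return 0;
	}

	int main(void)
	{
		/* hold the lock for writing so the trylock above fails */
		pthread_rwlock_wrlock(&shrinker_lock);
		printf("returned %lu without yielding\n", shrink_slab_sketch());
		pthread_rwlock_unlock(&shrinker_lock);
		return 0;
	}
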
> >
> > This patch modifies the shrink_slab() case. If the semaphore is
> > contended, the caller will still call cond_resched(). After each
> > successful call into a shrinker, the check for cond_resched() is
> > still necessary in case one shrinker call is particularly slow.
> >
> > This patch replaces
> > mm-vmscan-if-kswapd-has-been-running-too-long-allow-it-to-sleep.patch
> > in -mm.
> >
> > [mgorman@xxxxxxx: Preserve call to cond_resched after each call into shrinker]
> > From: Minchan Kim <minchan.kim@xxxxxxxxx>
> > Signed-off-by: Mel Gorman <mgorman@xxxxxxx>
> > ---
> > mm/vmscan.c | 9 +++++++--
> > 1 files changed, 7 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index af24d1e..0bed248 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -230,8 +230,11 @@ unsigned long shrink_slab(unsigned long scanned, gfp_t gfp_mask,
> >  	if (scanned == 0)
> >  		scanned = SWAP_CLUSTER_MAX;
> >
> > -	if (!down_read_trylock(&shrinker_rwsem))
> > -		return 1;	/* Assume we'll be able to shrink next time */
> > +	if (!down_read_trylock(&shrinker_rwsem)) {
> > +		/* Assume we'll be able to shrink next time */
> > +		ret = 1;
> > +		goto out;
> > +	}
> >
> >  	list_for_each_entry(shrinker, &shrinker_list, list) {
> >  		unsigned long long delta;
> > @@ -282,6 +285,8 @@ unsigned long shrink_slab(unsigned long scanned, gfp_t gfp_mask,
> >  		shrinker->nr += total_scan;
> >  	}
> >  	up_read(&shrinker_rwsem);
> > +out:
> > +	cond_resched();
> >  	return ret;
> >  }
> >
> >
>
>
>

