Re: [PATCH -mm] throttle direct reclaim when too many pages are isolated already (v3)

From: Andrew Morton
Date: Thu Jul 16 2009 - 00:03:23 EST


On Wed, 15 Jul 2009 23:53:18 -0400 Rik van Riel <riel@xxxxxxxxxx> wrote:

> @@ -1049,6 +1074,14 @@ static unsigned long shrink_inactive_lis
> 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(zone, sc);
> 	int lumpy_reclaim = 0;
>
> +	while (unlikely(too_many_isolated(zone, file, sc))) {
> +		congestion_wait(WRITE, HZ/10);
> +
> +		/* We are about to die and free our memory. Return now. */
> +		if (fatal_signal_pending(current))
> +			return SWAP_CLUSTER_MAX;
> +	}
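
(For reference, the hunk relies on a too_many_isolated() helper that is
not quoted above.  A minimal sketch of what such a check presumably
does; the vmstat counter names and the kswapd exemption are assumptions,
not taken from the quoted text:)

	static int too_many_isolated(struct zone *zone, int file,
				     struct scan_control *sc)
	{
		unsigned long inactive, isolated;

		/* kswapd is presumably never throttled here */
		if (current_is_kswapd())
			return 0;

		if (file) {
			inactive = zone_page_state(zone, NR_INACTIVE_FILE);
			isolated = zone_page_state(zone, NR_ISOLATED_FILE);
		} else {
			inactive = zone_page_state(zone, NR_INACTIVE_ANON);
			isolated = zone_page_state(zone, NR_ISOLATED_ANON);
		}

		/* throttle once more pages are isolated than remain inactive */
		return isolated > inactive;
	}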

mutter.

While I agree that handling fatal signals on the direct reclaim path
is probably a good thing, this seems like a fairly random place at
which to start the enhancement.

If we were to step back and approach this in a broader fashion, perhaps
we would find some commonality with the existing TIF_MEMDIE handling,
dunno.
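
(Illustration only: one shape that commonality could take is a single
"is this task dying" test shared by the OOM killer's TIF_MEMDIE
convention and the fatal-signal check above.  The helper name below is
made up for the sketch; it is not anything in the tree:)

	static inline int reclaimer_is_dying(void)
	{
		/* task already OOM-killed and given access to reserves */
		if (test_thread_flag(TIF_MEMDIE))
			return 1;

		/* ordinary SIGKILL delivered to a direct reclaimer */
		return fatal_signal_pending(current);
	}

The loop in shrink_inactive_list() could then bail out in both cases
instead of special-casing fatal signals.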


And I question the testedness of v3 :)