Re: [PATCH 4/4] Reintroduce zone_reclaim_interval for when zone_reclaim() scans and fails to avoid CPU spinning at 100% on NUMA
From: Andrew Morton
Date: Wed Jun 10 2009 - 01:55:40 EST
On Tue, 9 Jun 2009 18:01:44 +0100 Mel Gorman <mel@xxxxxxxxx> wrote:
> On NUMA machines, the administrator can configure zone_reclaim_mode, a
> more targeted form of direct reclaim. On machines with large NUMA distances,
> zone_reclaim_mode defaults to 1, meaning that clean unmapped pages will be
> reclaimed if the zone watermarks are not being met. The problem is that
> zone_reclaim() may get into a situation where it scans excessively without
> making progress.
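>
> For reference, the mode is runtime-tunable via the sysctl file
> /proc/sys/vm/zone_reclaim_mode. A trivial userspace check might look
> like the following (illustrative only, not part of this patch):
>
>	#include <stdio.h>
>
>	/* Illustrative only: report the current zone_reclaim_mode. */
>	int main(void)
>	{
>		FILE *f = fopen("/proc/sys/vm/zone_reclaim_mode", "r");
>		int mode;
>
>		if (f && fscanf(f, "%d", &mode) == 1)
>			printf("zone_reclaim_mode = %d\n", mode);
>		if (f)
>			fclose(f);
>		return 0;
>	}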
>
> One such situation occurred when a large tmpfs mount occupied a
> large percentage of memory overall. The pages did not get reclaimed by
> zone_reclaim(), but the lists were uselessly scanned at high frequency,
> making the CPU spin at 100%. The observation in the field was that
> malloc() stalled for a long time (minutes in some cases) while this was
> happening. The situation should be resolved now, and there are counters
> in place that detect when the scan-avoidance heuristics break, but the
> heuristics might still not be bullet-proof. If they fail again, the
> kernel should respond in some fashion other than uselessly scanning and
> chewing up CPU time.
>
> This patch reintroduces zone_reclaim_interval, which was removed by
> commit 34aa1330f9b3c5783d269851d467326525207422 [zoned vm counters:
> zone_reclaim: remove /proc/sys/vm/zone_reclaim_interval]. In the event
> the scan-avoidance heuristics fail, the failure is counted and
> zone_reclaim_interval avoids further excessive scanning.
More distressed fretting!
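If I'm reading it right, the effect is a jiffies-based back-off: after
a failed scan, don't scan that zone again until the interval has
elapsed. Roughly the sketch below; the field name and the helper are my
own illustration, not code from the patch:

	#include <linux/jiffies.h>	/* jiffies, time_before(), HZ */
	#include <linux/mmzone.h>	/* struct zone */

	extern unsigned int zone_reclaim_interval;	/* the sysctl, in seconds */

	/*
	 * Sketch only: "last_unsuccessful_zone_reclaim" is an assumed
	 * per-zone timestamp recorded when a scan fails to reclaim.
	 */
	static int zone_reclaim_should_skip(struct zone *zone)
	{
		/* Skip scanning while the last failed scan is recent. */
		return time_before(jiffies,
				   zone->last_unsuccessful_zone_reclaim +
				   zone_reclaim_interval * HZ);
	}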
Pages can be allocated and freed and reclaimed at rates anywhere from
zero per second to one million per second or more. So what sense does
it make to pace MM activity by wall time??
A better clock for pacing MM activity is page-allocation-attempts, or
pages-scanned, etc.