Re: [PATCH]: Free pages from local pcp lists under tight memory conditions
From: Rohit Seth
Date: Tue Nov 29 2005 - 18:11:42 EST
On Wed, 2005-11-23 at 19:02 -0800, Paul Jackson wrote:
> Rohit wrote:
> > I thought Nick et.al came up with some of the constant values like batch
> > size to tackle the page coloring issue specifically.
>
> I think this came about on a linux-ia64 thread started by Jack Steiner:
>
> http://www.gelato.unsw.edu.au/archives/linux-ia64/0504/13668.html
> Subject: per_cpu_pagesets degrades MPI performance
> From: Jack Steiner <steiner_at_sgi.com>
> Date: 2005-04-05 05:28:27
>
> Jack reported that per_cpu_pagesets were degrading some MPI benchmarks due
> to adverse page coloring. Nick responded, recommending a non-power-of-two
> batch size. Jack found that this helped nicely. That thread trails off,
> but it seems to be the origin of the 2**n-1 batch size in:
>
> mm/page_alloc.c:zone_batchsize()
> * Clamp the batch to a 2^n - 1 value. Having a power ...
> batch = (1 << fls(batch + batch/2)) - 1;
>
> I don't see here evidence that "per_cpu_pagelist is ... one single main
> reason the coloring effect is drastically reduced in 2.6 (over 2.4)
> based kernels." Rather, in this case at least, a batch size that is not
> a power of two was apparently needed to keep per_cpu_pagesets from
> hurting performance due to page coloring effects on some workloads.
>
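(Aside: for anyone who wants to see what that clamp actually produces,
here is a minimal standalone userspace sketch of it -- the fls() below
is a hand-rolled stand-in for the kernel's fls(), which returns the
1-based index of the highest set bit, and the loop bounds are purely
illustrative.)

    #include <stdio.h>

    /* Portable stand-in for the kernel's fls(): 1-based index of the
     * highest set bit, or 0 if x is 0. */
    static int fls(unsigned int x)
    {
            int bit = 0;

            while (x) {
                    bit++;
                    x >>= 1;
            }
            return bit;
    }

    int main(void)
    {
            unsigned int batch;

            /* Show how raw batch sizes round to 2^n - 1 values. */
            for (batch = 1; batch <= 64; batch <<= 1)
                    printf("batch=%2u -> %3u\n",
                           batch, (1U << fls(batch + batch / 2)) - 1);
            return 0;
    }

Running it: 4 -> 7, 8 -> 15, 16 -> 31, 32 -> 63, and so on. The result
is never an exact power of two, so successive pcp refills don't pull
pages out of the buddy lists at a power-of-two stride.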
Well, the batch size of a list (plus the high mark) is an integral part
of the per_cpu_pagelist infrastructure, and tuning is always required;
I don't think one fixed set of values covers all the cases, though.
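(To make that concrete, here is a rough sketch -- from memory, so treat
the exact multipliers as approximate and illustrative, not as the actual
kernel code -- of how the 2.6-era setup_pageset() derives the hot/cold
watermarks from the per-zone batch:)

    /* Rough sketch of 2.6-era setup_pageset() watermark derivation.
     * Multipliers are from memory and approximate -- illustrative only. */
    struct pcp_sketch {
            int high;       /* flush pages back to the buddy above this */
            int batch;      /* pages moved per refill or flush */
    };

    static void setup_pageset_sketch(struct pcp_sketch *hot,
                                     struct pcp_sketch *cold,
                                     unsigned long batch)
    {
            hot->high   = 6 * batch;        /* hot list can grow larger */
            hot->batch  = batch ? batch : 1;
            cold->high  = 2 * batch;        /* cold list is kept short */
            cold->batch = batch / 2 ? batch / 2 : 1;
    }

The point being that batch and high move together: change one without
retuning the other and you change how often, and in what chunk sizes,
pages cycle between the pcp lists and the buddy allocator.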
Could you please comment on the performance delta on the MPI workload
from this change in batch values? And what were the numbers before
per_cpu_pagelists were introduced?
thanks,
-rohit