Re: [PATCH] x86, NUMA: Fix empty memblk detection in numa_cleanup_meminfo()

From: Yinghai Lu
Date: Sat Apr 30 2011 - 20:44:09 EST


On 04/30/2011 05:33 AM, Tejun Heo wrote:
> From: Yinghai Lu <yinghai@xxxxxxxxxx>
>
> numa_cleanup_meminfo() trims each memblk between the low (0) and high
> (max_pfn) limits and discards empty ones. However, the emptiness
> detection incorrectly uses an equality test. If the start of a memblk
> is higher than max_pfn, it is empty but fails the equality test and
> doesn't get discarded.
>
> Fix it by using >= instead of ==.
>
> Signed-off-by: Yinghai Lu <yinghai@xxxxxxxxxx>
> Signed-off-by: Tejun Heo <tj@xxxxxxxxxx>
> ---
> So, something like this. Does this fix the problem you see?
>
> Thanks.
>
> arch/x86/mm/numa_64.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> Index: work/arch/x86/mm/numa.c
> ===================================================================
> --- work.orig/arch/x86/mm/numa.c
> +++ work/arch/x86/mm/numa.c
> @@ -191,7 +191,7 @@ int __init numa_cleanup_meminfo(struct n
> bi->end = min(bi->end, high);
>
> /* and there's no empty block */
> - if (bi->start == bi->end) {
> + if (bi->start >= bi->end) {
> numa_remove_memblk_from(i--, mi);
> continue;
> }
This one works too, but the printout is somewhat strange.
On a 512G system I got:

SRAT: Node 0 PXM 0 0-a0000
SRAT: Node 0 PXM 0 100000-80000000
SRAT: Node 0 PXM 0 100000000-1080000000
SRAT: Node 1 PXM 1 1080000000-2080000000
SRAT: Node 2 PXM 2 2080000000-3080000000
SRAT: Node 3 PXM 3 3080000000-4080000000
SRAT: Node 4 PXM 4 4080000000-5080000000
SRAT: Node 5 PXM 5 5080000000-6080000000
SRAT: Node 6 PXM 6 6080000000-7080000000
SRAT: Node 7 PXM 7 7080000000-8080000000
NUMA: Initialized distance table, cnt=8
NUMA: Node 0 [0,a0000) + [100000,80000000) -> [0,80000000)
NUMA: Node 0 [0,80000000) + [100000000,1080000000) -> [0,1000000000)


With the first patch, on the same 512G system I got:
NUMA: Node 0 [0,a0000) + [100000,80000000) -> [0,80000000)
NUMA: Node 0 [0,80000000) + [100000000,1000000000) -> [0,1000000000)

I still think the first one is cleaner.

Thanks

Yinghai
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/