Re: [PATCH v2] KSM: numa awareness sysfs knob

From: Izik Eidus
Date: Sat Jun 30 2012 - 05:50:39 EST


On 06/30/2012 01:50 AM, David Rientjes wrote:
On Fri, 29 Jun 2012, Andrew Morton wrote:

I've tested this patch on NUMA machines with 2, 4 and 8 nodes and
measured the speed of memory access inside KVM guests, with memory pinned
to one of the nodes, using this benchmark:

http://pholasek.fedorapeople.org/alloc_pg.c

Population standard deviations of access times, as a percentage of the
average, were as follows:

merge_nodes=1
2 nodes 1.4%
4 nodes 1.6%
8 nodes 1.7%

merge_nodes=0
2 nodes 1%
4 nodes 0.32%
8 nodes 0.018%
ooh, numbers! Thanks.

Ok, the standard deviation increases when merging pages from nodes with
remote distance, that makes sense. But if that's true, then you would
restrict either the entire application to local memory with mempolicies or
cpusets, or you would use mbind() to restrict this memory to that set of
nodes already so that accesses, even with ksm merging, would have
affinity.

You are right for the case where you write your own custom application,
but I think the KVM guest case is a little more problematic when the
guest memory must be split across several nodes.
