Re: [Xen-devel] NUMA_BALANCING and Xen PV guest regression in 3.20-rc0
From: Mel Gorman
Date: Mon Feb 23 2015 - 10:46:47 EST
On Mon, Feb 23, 2015 at 03:13:48PM +0000, Dario Faggioli wrote:
> Hi everyone,
>
> On Thu, 2015-02-19 at 17:01 +0000, Mel Gorman wrote:
> > On Thu, Feb 19, 2015 at 01:06:53PM +0000, David Vrabel wrote:
>
> > I cannot think of a reason why this would fail for NUMA balancing on bare
> > metal. The PAGE_NONE protection clears the present bit in p[te|md]_modify,
> > so the expectations are matched both before and after the patch is applied.
> > So, for bare metal at least,
> >
> > Acked-by: Mel Gorman <mgorman@xxxxxxx>
> >
> > I *think* this will work ok with Xen but I cannot 100% convince myself.
> > I'm adding Wei Liu to the cc who may have a Xen PV setup handy that
> > supports NUMA and may be able to test the patch to confirm.
> >
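For anyone following along, the expectation above boils down to a flags
check: a NUMA hinting entry is PROT_NONE protection with the present bit
cleared, which is what pte_modify(pte, PAGE_NONE) leaves behind. The toy
below models the 4.0-era x86 encoding (_PAGE_PRESENT in bit 0,
_PAGE_PROTNONE reusing the global bit 8); it illustrates the predicate
and is not actual kernel code:

/* Toy model of the x86 protnone check; values match 4.0-era
 * arch/x86 headers but this is an illustration only. */
#include <stdio.h>
#include <stdint.h>

#define _PAGE_PRESENT  0x001ULL  /* bit 0 */
#define _PAGE_PROTNONE 0x100ULL  /* bit 8, reuses the global bit */

/* A NUMA hinting entry: PROT_NONE set, present clear. */
static int pte_protnone(uint64_t flags)
{
        return (flags & (_PAGE_PROTNONE | _PAGE_PRESENT)) == _PAGE_PROTNONE;
}

int main(void)
{
        uint64_t mapped  = _PAGE_PRESENT;   /* ordinary present mapping */
        uint64_t hinting = _PAGE_PROTNONE;  /* NUMA hinting fault entry */

        printf("mapped:  protnone=%d\n", pte_protnone(mapped));   /* 0 */
        printf("hinting: protnone=%d\n", pte_protnone(hinting));  /* 1 */
        return 0;
}

On bare metal that encoding survives p[te|md]_modify unchanged; the open
question for Xen PV is whether it survives the hypervisor interface,
hence the testing below.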
> I'm not Wei, but I've been able to test a kernel with David's patch in
> the following conditions:
>
Thanks very much, it's appreciated.
> 1. as Dom0 kernel, when Xen does not have any virtual NUMA support
> 2. as DomU PV kernel, when Xen does not have any virtual NUMA support
> 3. as DomU PV kernel, when Xen _does_ _have_ virtual NUMA support
> (i.e., Wei's code)
>
> Cases 1. and 2. have been, I believe, tested by David already, but
> anyways... :-)
>
> Case 3. worked well for me, as the following commands show. In fact,
> with this in the guest config file:
>
> <SNIP>
>
> And this:
>
> root@test-pv:~# grep numa /proc/vmstat
> numa_hit 65987
> numa_miss 0
> numa_foreign 0
> numa_interleave 14473
> numa_local 58642
> numa_other 7345
> numa_pte_updates 596
> numa_huge_pte_updates 0
> numa_hint_faults 479
> numa_hint_faults_local 420
> numa_pages_migrated 51
>
> So, yes, I would say this works with Xen. Is that correct, Mel?
>
Yes, these stats indicate that NUMA balancing is active. There is not
much activity, but it's there: 596 PTEs were updated for hinting, 479
hint faults were trapped (420 of them local), and 51 pages were migrated.
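If you want to watch those counters while a benchmark runs, a sketch
like the one below does the same job as the grep above. It is plain C
against /proc/vmstat, which is just "name value" lines; treat it as
illustrative, it has not been tested on your setup:

/* Minimal sketch: print the numa_* counters from /proc/vmstat,
 * equivalent to `grep numa /proc/vmstat`. */
#include <stdio.h>
#include <string.h>

int main(void)
{
        FILE *f = fopen("/proc/vmstat", "r");
        char name[64];
        unsigned long long val;

        if (!f) {
                perror("/proc/vmstat");
                return 1;
        }
        while (fscanf(f, "%63s %llu", name, &val) == 2) {
                if (strstr(name, "numa"))
                        printf("%s %llu\n", name, val);
        }
        fclose(f);
        return 0;
}

Run it in a loop while 'perf bench numa' is going and numa_hint_faults /
numa_pages_migrated should keep climbing.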
> I'll try running more complex stuff like 'perf bench numa' inside the
> guest and see what happens...
>
Thanks.
--
Mel Gorman
SUSE Labs