Re: [Xen-devel] [PATCH v2 0/2] xen: vnuma introduction for pv guest

From: Dario Faggioli
Date: Wed Dec 04 2013 - 20:13:50 EST


On mer, 2013-12-04 at 01:20 -0500, Elena Ufimtseva wrote:
> On Tue, Dec 3, 2013 at 7:35 PM, Elena Ufimtseva <ufimtseva@xxxxxxxxx> wrote:
> > Oh guys, I feel really bad about not replying to these emails... Somehow these
> > replies all got deleted... weird.
> >
No worries... You should see *my* backlog. :-P

> > Ok, about that automatic balancing. At the moment of the last patch,
> > automatic numa balancing seemed to
> > work, but after rebasing on top of 3.12-rc2 I see similar issues.
> > I will try to figure out which commits broke it and will contact Ingo
> > Molnar and Mel Gorman.
> >
> As of now I have patch v4 ready for review. I am not sure whether it would
> be more beneficial to post it for review now, or to look closer at the
> current problem first.
>
You mean the Linux side? Perhaps stick a reference somewhere to the git
tree/branch where it lives but, before re-sending, let's wait for it to be
as issue-free as we can tell?

> The issue I am seeing right now is different from what was happening before.
> The corruption happens on the change_prot_numa() path:
>
Ok, so, I think I need to step back a bit from the actual stack trace
and look at the big picture. Please, Elena or anyone, correct me if I'm
saying something wrong about how Linux's autonuma works and interacts
with Xen.

The way it worked when I last looked at it was sort of like this:
- there was a kthread scanning all the pages, removing the PAGE_PRESENT
bit from actually present pages, and adding a new special one
(PAGE_NUMA or something like that);
- when a page fault is triggered and the PAGE_NUMA flag is found, the
kernel figures out the page is actually there, so no swap-in or anything.
However, it tracks which node the access to that page came from,
matches that against the node where the page actually resides, and
collects some statistics about it;
- at some point (and here I don't remember the exact logic, since it
changed quite a few times) pages ranking badly in the stats above are
moved from one node to another.
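
Just to make sure we are talking about the same mechanism, here is a
silly user-space-only sketch of that scan / hinting-fault / migrate
cycle. Everything in it (the struct, the function names, the "remote
vs. local hits" heuristic) is made up purely for illustration; the real
logic lives in mm/ and is obviously more involved:

/*
 * Toy model of: scan -> NUMA hinting fault -> migrate.
 * NOT kernel code; just the shape of the mechanism described above.
 */
#include <stdio.h>
#include <stdbool.h>

#define NR_PAGES 8

struct toy_page {
    bool present;      /* PAGE_PRESENT-like bit             */
    bool numa_hint;    /* PAGE_NUMA-like "hinting" bit      */
    int  node;         /* node the page currently lives on  */
    int  remote_hits;  /* hinting faults from another node  */
    int  local_hits;   /* hinting faults from the same node */
};

static struct toy_page pages[NR_PAGES];

/* Step 1: the scanner strips "present" and sets the hinting bit. */
static void scan_pages(void)
{
    for (int i = 0; i < NR_PAGES; i++) {
        if (pages[i].present) {
            pages[i].present = false;
            pages[i].numa_hint = true;
        }
    }
}

/* Step 2: a fault on a hinted page just collects stats; no swap-in. */
static void hinting_fault(int idx, int accessing_node)
{
    struct toy_page *p = &pages[idx];

    if (!p->numa_hint)
        return;            /* a "real" fault, not our business here */

    p->numa_hint = false;
    p->present = true;     /* the page was there all along */

    if (accessing_node == p->node)
        p->local_hits++;
    else
        p->remote_hits++;
}

/* Step 3: pages with bad locality stats get moved to another node. */
static void maybe_migrate(int idx, int target_node)
{
    struct toy_page *p = &pages[idx];

    if (p->remote_hits > p->local_hits && p->node != target_node) {
        printf("migrating page %d: node %d -> node %d\n",
               idx, p->node, target_node);
        p->node = target_node;
        p->remote_hits = p->local_hits = 0;
    }
}

int main(void)
{
    for (int i = 0; i < NR_PAGES; i++)
        pages[i] = (struct toy_page){ .present = true, .node = 0 };

    /* Pretend a task on node 1 keeps touching page 3. */
    scan_pages();
    hinting_fault(3, 1);
    scan_pages();
    hinting_fault(3, 1);

    maybe_migrate(3, 1);
    return 0;
}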

Is this description still accurate? If yes, here's what I would (double)
check, when running this in a PV guest on top of Xen:

1. the NUMA hinting page faults: are we getting and handling them
correctly in the PV guest? Are the stats in the guest kernel being
updated in a sensible way, i.e., do they make sense and properly
relate to the virtual topology of the guest? (A trivial way to
eyeball this is sketched right after this list.)
At some point we thought it would have been necessary to intercept
these faults and make sure the above is true with some help from the
hypervisor... Is this the case? Why? Why not?

2. what happens when autonuma tries to move pages from one node to
another? For us, that would mean moving from one virtual node
to another... Is there a need to do anything at all? I mean, is
this, from our perspective, just copying the content of an MFN from
node X into another MFN on node Y, or do we need to update some of
our vnuma tracking data structures in Xen?
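
BTW, about point 1, a quick sanity check from inside the guest could be
as simple as watching the automatic NUMA balancing counters exported in
/proc/vmstat (numa_pte_updates, numa_hint_faults, numa_hint_faults_local,
numa_pages_migrated, there when CONFIG_NUMA_BALANCING is on). The little
reader below is of course just an illustration of that, nothing more:

/* Dump the NUMA balancing counters from /proc/vmstat. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/vmstat", "r");
    char line[256];

    if (!f) {
        perror("/proc/vmstat");
        return 1;
    }

    while (fgets(line, sizeof(line), f)) {
        /* "numa_hint_faults" also matches numa_hint_faults_local. */
        if (!strncmp(line, "numa_pte_updates", 16) ||
            !strncmp(line, "numa_hint_faults", 16) ||
            !strncmp(line, "numa_pages_migrated", 19))
            fputs(line, stdout);
    }

    fclose(f);
    return 0;
}

If those counters stay flat while a NUMA-bound workload is running, or
if numa_hint_faults_local makes no sense w.r.t. the guest's virtual
topology, that would point at point 1 rather than at the migration path
in point 2.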

If we have this figured out already, then I think we just chase bugs and
repost the series. If not, well, I think we should figure it out first. :-D

Thanks and Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
