Re: Handling NUMA page migration
From: Michal Hocko
Date: Wed Jun 05 2013 - 05:56:35 EST
On Wed 05-06-13 11:32:15, Frank Mehnert wrote:
[...]
> Thank you very much for your help. As I said, this problem happens _only_
> with NUMA_BALANCING enabled. I understand that you treat the VirtualBox
> code as untrusted but the reason for the problem is that some assumption
> is obviously not met: The VirtualBox code assumes that the memory it
> allocates using case A and case B is
>
> 1. always present and
> 2. will always be backed by the same physical memory
>
> over its entire lifetime. Enabling NUMA_BALANCING seems to make this
> assumption false. I only want to know why.
As I said earlier, neither the manual node migration nor the numa_fault
handler migrates pages with an elevated ref count (your case A) or pages
that are not on the LRU. So if your reserved pages might be on the LRU,
then you probably have to look into numamigrate_isolate_page and add an
exception for PageReserved pages (see the sketch below). But I am a bit
suspicious this is the cause, because reclaim doesn't treat PageReserved
pages specially either, so they could get reclaimed as well. Or maybe
you have handled that path in your kernel.
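Roughly something like this in mm/migrate.c (an untested sketch only;
the exact surrounding code depends on your kernel version):

	/* mm/migrate.c, numamigrate_isolate_page() -- untested sketch */
	int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
	{
		...
		/*
		 * Possible exception for driver-owned memory: never
		 * isolate reserved pages, so NUMA balancing leaves them
		 * mapped where they are.
		 */
		if (PageReserved(page))
			return 0;

		if (isolate_lru_page(page))
			return 0;
		...
	}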
The other option is that you depend on timing or something like that
which no longer holds. That would be hard to debug, though.
> I see, you don't believe me. I will add more code to the kernel to log
> which pages were migrated.
A simple test for the PageReserved flag in numamigrate_isolate_page
should tell you more; a sketch of such a check follows.
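For the logging, even something as crude as this (again, only a sketch)
right at the top of numamigrate_isolate_page would confirm or rule out
the reserved pages:

	/* untested debugging sketch */
	if (PageReserved(page))
		printk(KERN_WARNING "numa: isolating reserved page pfn %lx\n",
		       page_to_pfn(page));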
This would cover the migration part. Another potential problem could be
that the page might get unmapped and marked for a NUMA hinting fault
(see do_numa_page and the rough flow sketched below). So maybe your code
just assumes that the page never gets unmapped?
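The rough flow is (from memory, so double check the details in your
tree):

	/*
	 * Sketch of the NUMA hinting fault path, not exact code:
	 *
	 * task_numa_work() -> change_prot_numa()
	 *	the PTEs are turned into pte_numa entries, i.e. the page
	 *	is temporarily not accessible as far as the hardware is
	 *	concerned
	 *
	 * next access -> handle_pte_fault() -> do_numa_page()
	 *	the hinting fault is resolved, the page may be migrated
	 *	to another node here, and the PTE is re-established
	 */

So the mapping can go away transiently even when no migration happens in
the end.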
--
Michal Hocko
SUSE Labs