Re: [PATCH 2/3] x86: Define _PAGE_NUMA with unused physical address bits PMD and PTE levels

From: Mel Gorman
Date: Mon Apr 07 2014 - 14:29:14 EST


On Mon, Apr 07, 2014 at 08:19:10PM +0400, Cyrill Gorcunov wrote:
> On Mon, Apr 07, 2014 at 04:49:35PM +0100, Mel Gorman wrote:
> > On Mon, Apr 07, 2014 at 04:32:39PM +0100, David Vrabel wrote:
> > > On 07/04/14 16:10, Mel Gorman wrote:
> > > > _PAGE_NUMA is currently an alias of _PAGE_PROTNONE to trap NUMA hinting
> > > > faults. As the bit is shared, care is taken that _PAGE_NUMA is only used in
> > > > places that _PAGE_PROTNONE could not reach, but this still causes problems
> > > > on Xen and is conceptually difficult.
> > >
> > > The problem with Xen guests occurred because mprotect() /was/ confusing
> > > PROTNONE mappings with _PAGE_NUMA and clearing the non-existent NUMA hints.
> >
> > I didn't bother spelling it out in case I gave the impression that I was
> > blaming Xen for the problem. Now that the bit has changed, does it help
> > the Xen problem or cause another collision of some sort? There is no
> > guarantee _PAGE_NUMA will remain bit 62, but at worst it'll use bit 11
> > and NUMA_BALANCING will depend on !KMEMCHECK.
>
> Fwiw, we're using bit 11 for soft-dirty tracking, so I really hope that worst
> case never happens. (At the moment I'm trying to figure out whether this
> series would make it possible to clean up the ugly macros in pgoff_to_pte
> for 2-level pages.)

I had considered the soft-dirty tracking usage of the same bit. I thought I'd
be able to swizzle around it or, as a further worst case, make soft-dirty and
automatic NUMA balancing mutually exclusive. Unfortunately, upon examination
it's not obvious how to have both of them share a bit, and I suspect any
attempt to do so will break CRIU. In my current tree, NUMA_BALANCING cannot be
set if MEM_SOFT_DIRTY is set, which is not particularly satisfactory. Next on
the list is examining whether _PAGE_BIT_IOMAP can be used.
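
For reference, a minimal self-contained sketch of the bit collision under
discussion. The bit-11 definitions mirror the x86 pgtable_types.h layout of
the time (soft-dirty and kmemcheck's hidden bit share it); the _PAGE_BIT_NUMA
value and names below are only an illustration of the bit-62 placement and
the worst-case fallback described above, not the actual patch:

	#include <stdint.h>
	#include <stdio.h>

	/* Stand-ins for the kernel's pteval_t and _AT() so this compiles standalone. */
	typedef uint64_t pteval_t;
	#define _AT(T, x)		((T)(x))

	/* Bit 11 is a software bit that is already claimed twice. */
	#define _PAGE_BIT_HIDDEN	11	/* hidden by kmemcheck */
	#define _PAGE_BIT_SOFT_DIRTY	11	/* soft-dirty tracking used by CRIU */

	/* Preferred home for _PAGE_NUMA: an unused physical address bit. */
	#define _PAGE_BIT_NUMA		62	/* illustrative; no guarantee it stays here */

	#define _PAGE_HIDDEN		(_AT(pteval_t, 1) << _PAGE_BIT_HIDDEN)
	#define _PAGE_SOFT_DIRTY	(_AT(pteval_t, 1) << _PAGE_BIT_SOFT_DIRTY)
	#define _PAGE_NUMA		(_AT(pteval_t, 1) << _PAGE_BIT_NUMA)

	int main(void)
	{
		/*
		 * With bit 62, _PAGE_NUMA does not overlap the soft-dirty/
		 * kmemcheck bit.  If it had to fall back to bit 11, the masks
		 * would collide, which is why NUMA_BALANCING would then have
		 * to exclude MEM_SOFT_DIRTY and depend on !KMEMCHECK.
		 */
		printf("numa/soft-dirty overlap: %llu\n",
		       (unsigned long long)(_PAGE_NUMA & _PAGE_SOFT_DIRTY));
		return 0;
	}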

--
Mel Gorman
SUSE Labs