Re: Crash in MM code in v4.4.y, v4.9.y with TRANSPARENT_HUGEPAGE enabled

From: Guenter Roeck
Date: Fri Aug 17 2018 - 20:44:52 EST


On 08/17/2018 05:25 PM, Linus Torvalds wrote:
> On Fri, Aug 17, 2018 at 3:27 PM Guenter Roeck <linux@xxxxxxxxxxxx> wrote:

>> [ 6.649970] random: crng init done
>> [ 6.689002] BUG: unable to handle kernel paging request at ffffeafffa1a0020

> Hmm. Lots of bits set.

>> [ 6.689082] RIP: 0010:[<ffffffff8116ba10>] [<ffffffff8116ba10>] page_remove_rmap+0x10/0x230
>> [ 6.689082] RSP: 0018:ffffc900007abc18 EFLAGS: 00000296
>> [ 6.689082] RAX: ffffea0005e58000 RBX: ffffeafffa1a0000 RCX: 0000000020200000
>> [ 6.689082] RDX: 00003fffffe00000 RSI: 0000000000000001 RDI: ffffeafffa1a0000

> Is that RDX value the same value as PHYSICAL_PMD_PAGE_MASK?
>
> If I did my math right, it would be, if your CPU has 46 bits of
> physical memory. Might that be the case?

Yes.

> The reason I mention that is because we had the bug with spurious
> inversion of the zero pte/pmd, fixed by
>
> f19f5c49bbc3 ("x86/speculation/l1tf: Exempt zeroed PTEs from inversion")

I applied that patch, but it didn't help. I get exactly the same crash and
traceback.

> and that would make a zeroed pmd entry be inverted by
> PHYSICAL_PMD_PAGE_MASK, and then you get odd garbage page pointers
> etc.
>
> Maybe. I could have gotten the math wrong too, but it sounds like the
> register contents _potentially_ might match up with something like
> this, and then we'd zap a bogus hugepage because of some confusion.

> Although then I'd have expected the bisection to hit
> "x86/speculation/l1tf: Invert all not present mappings" instead of the
> one you hit, so I don't know.
>
> Plus I'd have expected the problem to have been in mainline too, and
> apparently it's just the 4.4 and 4.9 backports.

Personally I suspect that something went wrong or is missing in the backport
from 4.14 to 4.9. 5-level paging was introduced in between, and THP support
was extended to additional architectures. With all those changes it is easy
to miss something, though I have no idea what that might be.

Guenter