Re: [mm 4.15-rc8] Random oopses under memory pressure.

From: Dave Hansen
Date: Tue Jan 16 2018 - 03:07:08 EST


On 01/15/2018 06:14 PM, Linus Torvalds wrote:
> But I'm adding Dave Hansen explicitly to the cc, in case he has any
> ideas. Not because I blame him, but he's touched the sparsemem code
> fairly recently, so maybe he'd have some idea on adding sanity
> checking to the sparsemem version of pfn_to_page().

I swear I haven't touched it lately!

I'm not sure I'd go after pfn_to_page(). *Maybe* if the oopses were
close to places where we had just done a pfn_to_page(), but I'm not
seeing those.
These, for instance (from the January 5th post), have sane (~400MB)
PFNs and all trip the bad-page checks because the page was seen locked
at free:

[ 192.152510] BUG: Bad page state in process a.out pfn:18566
[ 77.872133] BUG: Bad page state in process a.out pfn:1873a
[ 188.992549] BUG: Bad page state in process a.out pfn:197ea

and the page in all those cases came off a list, not out of a pte or
something that would need pfn_to_page(). The page fault path leading up
to the "EIP is at page_cache_tree_insert+0xbe/0xc0" probably doesn't
have a pfn_to_page() anywhere in there at all.

Did anyone else notice the

[ 31.068198] ? vmalloc_sync_all+0x150/0x150

present in a bunch of the stack traces? That should be pretty uncommon.
Is it just a stale entry from the normal do_page_fault() stack that the
stack dumper picked up (the '?' means the unwinder couldn't verify it
as part of the real call chain)?

A few things from earlier in this thread:

> [ 44.103192] page:5a5a0697 count:-1055023618 mapcount:-1055030029 mapping:26f4be11 index:0xc11d7c83
> [ 44.103196] flags: 0xc10528fe(waiters|error|referenced|uptodate|dirty|lru|active|reserved|private_2|mappedtodisk|swapbacked)
> [ 44.103200] raw: c10528fe c114fff7 c11d7c83 c11d84f2 c11d9dfe c11daa34 c11daaa0 c13e65df
> [ 44.103201] raw: c13e4a1c c13e4c62
> [ 44.103202] page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) <= 0)
> [ 44.103203] page->mem_cgroup:35401b27

Isn't that 'page:' a non-aligned address, and in userspace? It's also
weird that the dump then spits out kernel-looking addresses for fields
of what looks like a userspace pointer. Which VMSPLIT option are you
running with, btw?

I'm still pretty stumped, though.