Re: [PATCH 2/2] /proc/kpageflags: do not use uninitialized struct pages

From: Michal Hocko
Date: Wed Aug 07 2019 - 09:17:11 EST


On Tue 06-08-19 09:15:25, Dan Williams wrote:
> On Mon, Aug 5, 2019 at 11:47 PM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> >
> > On Mon 05-08-19 20:27:03, Dan Williams wrote:
> > > On Sun, Aug 4, 2019 at 10:31 PM Toshiki Fukasawa
> > > <t-fukasawa@xxxxxxxxxxxxx> wrote:
> > > >
> > > > On 2019/07/26 16:06, Michal Hocko wrote:
> > > > > On Fri 26-07-19 06:25:49, Toshiki Fukasawa wrote:
> > > > >>
> > > > >>
> > > > >> On 2019/07/25 18:03, Michal Hocko wrote:
> > > > >>> On Thu 25-07-19 02:31:18, Toshiki Fukasawa wrote:
> > > > >>>> A kernel panic was observed while reading /proc/kpageflags for the
> > > > >>>> first few pfns allocated by a pmem namespace:
> > > > >>>>
> > > > >>>> BUG: unable to handle page fault for address: fffffffffffffffe
> > > > >>>> [ 114.495280] #PF: supervisor read access in kernel mode
> > > > >>>> [ 114.495738] #PF: error_code(0x0000) - not-present page
> > > > >>>> [ 114.496203] PGD 17120e067 P4D 17120e067 PUD 171210067 PMD 0
> > > > >>>> [ 114.496713] Oops: 0000 [#1] SMP PTI
> > > > >>>> [ 114.497037] CPU: 9 PID: 1202 Comm: page-types Not tainted 5.3.0-rc1 #1
> > > > >>>> [ 114.497621] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.0-0-g63451fca13-prebuilt.qemu-project.org 04/01/2014
> > > > >>>> [ 114.498706] RIP: 0010:stable_page_flags+0x27/0x3f0
> > > > >>>> [ 114.499142] Code: 82 66 90 66 66 66 66 90 48 85 ff 0f 84 d1 03 00 00 41 54 55 48 89 fd 53 48 8b 57 08 48 8b 1f 48 8d 42 ff 83 e2 01 48 0f 44 c7 <48> 8b 00 f6 c4 02 0f 84 57 03 00 00 45 31 e4 48 8b 55 08 48 89 ef
> > > > >>>> [ 114.500788] RSP: 0018:ffffa5e601a0fe60 EFLAGS: 00010202
> > > > >>>> [ 114.501373] RAX: fffffffffffffffe RBX: ffffffffffffffff RCX: 0000000000000000
> > > > >>>> [ 114.502009] RDX: 0000000000000001 RSI: 00007ffca13a7310 RDI: ffffd07489000000
> > > > >>>> [ 114.502637] RBP: ffffd07489000000 R08: 0000000000000001 R09: 0000000000000000
> > > > >>>> [ 114.503270] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000240000
> > > > >>>> [ 114.503896] R13: 0000000000080000 R14: 00007ffca13a7310 R15: ffffa5e601a0ff08
> > > > >>>> [ 114.504530] FS: 00007f0266c7f540(0000) GS:ffff962dbbac0000(0000) knlGS:0000000000000000
> > > > >>>> [ 114.505245] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > > > >>>> [ 114.505754] CR2: fffffffffffffffe CR3: 000000023a204000 CR4: 00000000000006e0
> > > > >>>> [ 114.506401] Call Trace:
> > > > >>>> [ 114.506660] kpageflags_read+0xb1/0x130
> > > > >>>> [ 114.507051] proc_reg_read+0x39/0x60
> > > > >>>> [ 114.507387] vfs_read+0x8a/0x140
> > > > >>>> [ 114.507686] ksys_pread64+0x61/0xa0
> > > > >>>> [ 114.508021] do_syscall_64+0x5f/0x1a0
> > > > >>>> [ 114.508372] entry_SYSCALL_64_after_hwframe+0x44/0xa9
> > > > >>>> [ 114.508844] RIP: 0033:0x7f0266ba426b
> > > > >>>>
> > > > >>>> The reason for the panic is that stable_page_flags(), which parses
> > > > >>>> the page flags, uses uninitialized struct pages reserved by the
> > > > >>>> ZONE_DEVICE driver.
> > > > >>>
> > > > >>> Why hasn't pmem initialized those struct pages?
> > > > >>
> > > > >> We proposed initializing them in a previous approach, but that
> > > > >> wasn't merged.
> > > > >> (See https://marc.info/?l=linux-mm&m=152964792500739&w=2)
> > > > >>
> > > > >>> Isn't that a bug that should be addressed rather than papered over
> > > > >>> like this?
> > > > >>
> > > > >> I'm not sure. What do you think, Dan?
> > > > >
> > > > > Yeah, I am really curious about details. Why do we keep uninitialized
> > > > > struct pages at all? What is a random pfn walker supposed to do? What
> > > > > kind of metadata would be clobbered? In other words, much more detail,
> > > > > please.
> > > > >
> > > > I also want to know. I do not think that initializing struct pages will
> > > > clobber any metadata.
> > >
> > > The nvdimm implementation uses vmem_altmap to arrange for the 'struct
> > > page' array to be allocated from a reservation within the pmem
> > > namespace itself. A namespace in this mode contains an info-block that
> > > consumes the first 8K of the namespace capacity, capacity designated
> > > for the page mapping, capacity for optionally padding the start of data
> > > to 4K, 2MB, or 1GB alignment (on x86), and then the namespace data
> > > itself. The implementation passes a section-aligned (now
> > > subsection-aligned) address to arch_add_memory() to establish the
> > > linear mapping that covers the metadata, and then vmem_altmap tells
> > > memmap_init_zone() which pfns represent data. The implementation only
> > > provides enough 'struct page' capacity for pfn_to_page() to operate on
> > > the data space, not on the namespace metadata space.
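
For reference, the bookkeeping described above is carried in struct
vmem_altmap. A minimal sketch of how a pfn walker could recognize the
metadata portion of such a namespace, assuming the ~v5.3 layout of that
structure and the existing vmem_altmap_offset() helper (the helper name
below is made up for illustration; this is not the patch under
discussion):

  #include <linux/memremap.h>

  static bool pfn_is_namespace_metadata(unsigned long pfn,
                                        struct vmem_altmap *altmap)
  {
      /*
       * pfn_to_page() is only expected to work from
       * base_pfn + vmem_altmap_offset() onwards; everything below
       * that backs the namespace info-block and the memmap
       * (struct page array) itself.
       */
      return pfn >= altmap->base_pfn &&
             pfn < altmap->base_pfn + vmem_altmap_offset(altmap);
  }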
> >
> > Maybe I am dense, but I do not really understand what prevents those
> > struct pages from being initialized to whatever state the nvdimm
> > subsystem expects them to be in. Is that an initialization speed-up
> > optimization?
>
> No, not an optimization. If anything, it was a regrettable choice in
> the initial implementation not to reserve struct page space for the
> metadata area. Certainly the kernel could fix this going forward, and
> there are some configurations where even the existing allocation could
> cover those pfns, but there are others that need that reservation. So
> there is a regression risk for some currently working configurations.
>
> As always, we could try making the reservation change, fail to
> instantiate old namespaces that don't reserve enough capacity, and see
> who screams. I think the risk is low, but non-zero. That makes
> teaching kpageflags_read() about the constraint my first choice.

Thanks for the explanation!
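
For concreteness, "teaching" a blind walker like kpageflags_read() about
the constraint could look roughly like the sketch below, which refuses
to dereference a struct page unless its section has been onlined;
ZONE_DEVICE sections are added but never onlined, so
pfn_to_online_page() returns NULL for them. This is only an
illustration of the idea (helper name made up, reusing the existing
stable_page_flags() from fs/proc/page.c), not the patch in this thread,
and it would also hide the initialized pmem data pages from
/proc/kpageflags:

  #include <linux/kernel-page-flags.h>
  #include <linux/memory_hotplug.h>

  static u64 flags_for_pfn(unsigned long pfn)
  {
      struct page *page = pfn_to_online_page(pfn);

      /* never touch a memmap that may not have been initialized */
      if (!page)
          return 1ULL << KPF_NOPAGE;

      return stable_page_flags(page);
  }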

> > > The proposal to validate ZONE_DEVICE pfns against the altmap seems the
> > > right approach to me.
> >
> > This, however, means that all pfn walkers have to be somehow aware of
> > these special struct pages, and that is error prone.
>
> True, but what other blind pfn walkers do we have besides
> kpageflags_read()? I expect most other pfn_to_page() code paths are
> constrained to known pfns and avoid this surprise, but yes, I need to
> go audit those.

Well, most pfn walkers in the MM code do stay within a zone boundary.
Many also check the zone of each page to ensure that interleaving zones
are handled properly. I hope that these special ZONE_DEVICE ranges are
not going to interleave with other, normal zones. But as always, having
a subtle land mine like this is really not nice. Every valid pfn should
have a real, initialized struct page.
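
To illustrate the usual pattern, a typical zone-bounded walker does
something along these lines (sketch only, with a made-up function name),
which is why it normally never trips over pfns that belong to a range it
does not understand:

  #include <linux/mm.h>
  #include <linux/mmzone.h>

  static void walk_zone_pages(struct zone *zone)
  {
      unsigned long pfn;

      for (pfn = zone->zone_start_pfn; pfn < zone_end_pfn(zone); pfn++) {
          struct page *page;

          if (!pfn_valid(pfn))
              continue;

          page = pfn_to_page(pfn);

          /* skip pfns that belong to an interleaving zone */
          if (page_zone(page) != zone)
              continue;

          /* ... operate on an initialized struct page ... */
      }
  }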

--
Michal Hocko
SUSE Labs