Re: [PATCH 4/4] x86/mm: replace GFP_ATOMIC with GFP_KERNEL for direct map allocations
From: Edgecombe, Rick P
Date: Thu Nov 11 2021 - 16:36:03 EST
On Thu, 2021-11-11 at 13:02 +0200, Mike Rapoport wrote:
> The allocations of the direct map pages mostly happen very early during
> the system boot, and they use either the page table cache in the brk area
> of bss or memblock.
>
> The few callers that effectively use the page allocator for direct map
> updates are gart_iommu_init() and memory hotplug. Neither of them happens
> in an atomic context, so there is no reason to use GFP_ATOMIC for these
> allocations.
There are some other places where these paths could get triggered.
alloc_low_pages() gets called by a bunch of memremap_pages() callers, and
spp_getpage() gets called from the set_fixmap() family of functions. I
guess you are saying those should not end up triggering an allocation
once after_bootmem is set?
I went ahead and did a search, and found this getting called from a timer
callback:

ghes_poll_func()
  spin_lock_irqsave()
  ghes_proc()
    ghes_read_estatus()
      __ghes_read_estatus()
        ghes_copy_tofrom_phys()
          ghes_map()
            __set_fixmap()
              ...spp_getpage()?
I'm not sure if it's actually possible to hit, but potentially it could
splat about sleeping in atomic context. It would depend on something else
not having already mapped the needed fixmap pte, which maybe would never
happen. It seems a little rickety though.
For alloc_low_pages(), I noticed the callers don't check for allocation
failure. I'm a little surprised there haven't been reports of the
allocation failing, because these operations can result in many more
pages getting allocated well past boot, and a failure causes a NULL
pointer dereference.
I checked over the alloc_low_pages() callers and didn't see any problems
with removing GFP_ATOMIC, but I wonder if it should try harder to
allocate, or properly check for allocation failure in the callers to
remove the pre-existing risk of a crash. GFP_KERNEL doesn't look to make
it any worse though, and is probably slightly less likely to crash.