[PATCH] arch/x86: memory_hotplug, do not bump up max_pfn for device private pages

From: Balbir Singh
Date: Mon Mar 31 2025 - 20:08:18 EST


Commit 7ffb791423c7 ("x86/kaslr: Reduce KASLR entropy on most x86 systems")
exposed a bug in the interaction between nokaslr and zone device private
memory, as seen on a system with an AMD iGPU and dGPU (see [1]).
The root cause of the issue is that the GPU driver registers a zone
device private memory region. When KASLR is disabled or the above commit
is applied, direct_map_physmem_end is set much higher than 10 TiB,
typically to the 64 TiB address. When zone device private memory is added
to the system via add_pages(), it bumps up max_pfn to the same value.
This causes dma_addressing_limited() to return true, because the device
cannot address memory all the way up to max_pfn.
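
For context, the dependency on max_pfn is roughly the following (a
simplified sketch, not the exact mainline implementation; the function
name is illustrative and stands in for dma_addressing_limited() in the
DMA core, which derives the required reach from max_pfn on the
dma-direct path):

/*
 * Simplified sketch: the required DMA reach is derived from the highest
 * PFN the kernel believes is populated. The real code rounds this up
 * into a bit mask before comparing.
 */
static bool device_is_addressing_limited(struct device *dev)
{
	u64 required = ((u64)(max_pfn - 1)) << PAGE_SHIFT;

	/*
	 * If the device's DMA mask cannot cover that address, the device
	 * is treated as addressing limited, even though the memory above
	 * its mask is only unaddressable device private pages.
	 */
	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) < required;
}

With max_pfn bumped to the end of the device private range, this check
starts returning true for devices that could previously address all of
system RAM.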

This caused a regression for games played on the iGPU, as it resulted in
the DMA32 zone being used for GPU allocations.

Fix this by not bumping up max_pfn on x86 systems when pgmap is passed
into add_pages(). The presence of pgmap is used to determine whether
device private memory is being added via add_pages().

More details:

devm_request_mem_region() and request_free_mem_region() are used to
request device private memory. iomem_resource is passed as the base
resource with start and end parameters. iomem_resource's end depends on
several factors, including the platform and virtualization. On bare-metal
x86, for example, this value is derived from boot_cpu_data.x86_phys_bits.
boot_cpu_data.x86_phys_bits can change depending on support for MKTME.
By default it matches log2(direct_map_physmem_end), which is 46 to 52
bits depending on the number of page table levels. The allocation
routines use iomem_resource's end and direct_map_physmem_end to figure
out where to allocate the region.
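
For reference, the registration path that reaches add_pages() with a
pgmap looks roughly like this (a condensed sketch modeled on the
lib/test_hmm.c pattern; 'size' and 'my_devmem_ops' are placeholders for
driver-specific values):

	struct dev_pagemap *pgmap;
	struct resource *res;

	/* Carve a free range for device private memory out of iomem_resource. */
	res = request_free_mem_region(&iomem_resource, size, "device private");
	if (IS_ERR(res))
		return PTR_ERR(res);

	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
	if (!pgmap)
		return -ENOMEM;

	pgmap->type = MEMORY_DEVICE_PRIVATE;
	pgmap->range.start = res->start;
	pgmap->range.end = res->end;
	pgmap->nr_range = 1;
	pgmap->ops = &my_devmem_ops;	/* driver-specific dev_pagemap_ops */

	/*
	 * memremap_pages() eventually calls the arch add_pages() with
	 * params->pgmap set, which is what the fix below keys off to skip
	 * the max_pfn update.
	 */
	memremap_pages(pgmap, numa_node_id());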

arch/powerpc is also impacted by this bug; this patch does not fix
the issue for powerpc.

Testing:
1. Tested on a virtual machine with test_hmm for zone device insertion.
2. A previous version of this patch was tested by Bert, see [2].

Link: https://lore.kernel.org/lkml/20250310112206.4168-1-spasswolf@xxxxxx/ [1]
Link: https://lore.kernel.org/lkml/d87680bab997fdc9fb4e638983132af235d9a03a.camel@xxxxxx/ [2]
Fixes: 7ffb791423c7 ("x86/kaslr: Reduce KASLR entropy on most x86 systems")

Cc: "Christian König" <christian.koenig@xxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Kees Cook <kees@xxxxxxxxxx>
Cc: Bjorn Helgaas <bhelgaas@xxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Alex Deucher <alexander.deucher@xxxxxxx>
Cc: Bert Karwatzki <spasswolf@xxxxxx>
Cc: Madhavan Srinivasan <maddy@xxxxxxxxxxxxx>
Cc: Nicholas Piggin <npiggin@xxxxxxxxx>


Signed-off-by: Balbir Singh <balbirs@xxxxxxxxxx>
---
I've left powerpc out of this regression fix due to the time required
to set up and test via qemu. I wanted to address the regression quickly.


arch/x86/mm/init_64.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index dce60767124f..cc60b57473a4 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -970,9 +970,18 @@ int add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
 	ret = __add_pages(nid, start_pfn, nr_pages, params);
 	WARN_ON_ONCE(ret);
 
-	/* update max_pfn, max_low_pfn and high_memory */
-	update_end_of_memory_vars(start_pfn << PAGE_SHIFT,
-				  nr_pages << PAGE_SHIFT);
+	/*
+	 * add_pages() is called by memremap_pages() for adding device private
+	 * pages. Do not bump up max_pfn in the device private path. max_pfn
+	 * changes affect dma_addressing_limited(). dma_addressing_limited()
+	 * returning true when max_pfn exceeds the device's addressable memory
+	 * can force device drivers to use bounce buffers and impact their
+	 * performance negatively.
+	 */
+	if (!params->pgmap)
+		/* update max_pfn, max_low_pfn and high_memory */
+		update_end_of_memory_vars(start_pfn << PAGE_SHIFT,
+					  nr_pages << PAGE_SHIFT);
 
 	return ret;
 }
--
2.48.1