Re: [RFC 1/1] mm/pagewalk: don't split device-backed huge pfnmaps

From: Boone, Max

Date: Tue Mar 10 2026 - 09:16:09 EST




> On Mar 10, 2026, at 10:11 AM, David Hildenbrand (Arm) <david@xxxxxxxxxx> wrote:
>
>
> On 3/10/26 00:02, Boone, Max wrote:
>> On Mar 9, 2026 9:19 PM, "David Hildenbrand (Arm)" <david@xxxxxxxxxx> wrote:
>>>
>>> On 3/9/26 18:49, Max Boone wrote:
>>>> Don't split and descend on special PMD/PUDs, which are generally
>>>> device-backed huge pfnmaps as used by vfio for BAR mapping. These
>>>> can be faulted back in after splitting and before descending, which
>>>> can race to an illegal read.
>>>>
>>>> Signed-off-by: Max Boone <mboone@xxxxxxxxxx>
>>>> Signed-off-by: Max Tottenham <mtottenh@xxxxxxxxxx>
>>>>
>>>> ---
>>>> mm/pagewalk.c | 24 ++++++++++++++++++++----
>>>> 1 file changed, 20 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
>>>> index a94c401ab..d1460dd84 100644
>>>> --- a/mm/pagewalk.c
>>>> +++ b/mm/pagewalk.c
>>>> @@ -147,10 +147,18 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
>>>> continue;
>>>> }
>>>>
>>>> - if (walk->vma)
>>>> + if (walk->vma) {
>>>> + /*
>>>> + * Don't descend into device-backed pfnmaps,
>>>> + * they might refault the PMD entry.
>>>> + */
>>>> + if (unlikely(pmd_special(*pmd)))
>>>> + continue;
>>>
>>> In general, if you're using pmd_special()/pud_special() and friends in
>>> ordinary page table walking code, you are doing something wrong. We
>>> don't want to leak these details into such page table walkers.
>>
>> That sounds sensible; there is a check in the split_huge_pud macro, which previously included DAX memory.
>>
>> Related to handling that macro, I see another proposed patch for lazy provisioning of PTEs for PMD-order THPs [1]. Possibly adding a return code to the split functions would allow a better solution here as well?
>>
>
> Maybe. I think the behavior of trying to split is ok. We just have
> to teach code to deal with races.
>
> Because the very same problem can likely be triggered by having the
> splitting/unmapping be triggered from another thread in some other
> code path concurrently.

I was previously testing on 6.12 and didn’t see any changes to vfio-pci or
pagewalk.c, which prompted me to check whether I could reproduce the
bug on a more recent kernel.

However, when I tried to reproduce the bug on 7.0-rc2 (after adding some
tracing to get a clearer picture of the sequence of events), it doesn’t happen.
The VFIO DMA set operation is much faster on 7.0, so possibly the race
window is too small for it to occur in a reasonable time.

>
>> I'm not sure if making the split (or rather unmap; calling it a split has been a bit confusing to me, as it doesn't allocate PMDs) a no-op will improve things, as to my understanding it will still try to descend.
>>
>>> We do have vm_normal_page_pmd() to identify special mappings, but I
>>> first have to understand what exactly you are trying to solve here.
>>
>> Specifically for the page walker: avoid splitting and descending into the PUD-order pfnmaps that VFIO creates for the BAR mappings, as these (can) represent hardware control registers rather than regular memory. I haven't been able to reproduce it with PMD-level pfnmaps, but I'll build a kernel with PUD-level pfnmaps disabled tomorrow.
>>
>> But of course I'm mainly concerned with fixing the race so that reading numa_maps does not cause an illegal read, which crashes the reading process while it holds the mmap lock of the target process (with subsequent reads of proc freezing, waiting for an mmap lock they'll never get).
>
> Right, that's what we should focus on.
>
>>
>>> (You would also be affecting the remapping of the huge zero folio.)
>>
>> Ah, good one. I do think that this race can occur with PMD-level mappings, looking at the walking and splitting logic, but given that the PUD-level mapping triggered the (rare) occurrence, I'm fine with focusing there first. I guess it helps that we don't have 1G THPs, but it would be good to treat 2M and 1G similarly?
>
> I don't think it can happen for PMDs, as pte_offset_map_lock() double-checks
> that we really have a page table there. See __pte_offset_map() where we do a
>
> pmdval = pmdp_get_lockless(pmd);
> ...
> if (unlikely(pmd_none(pmdval) || !pmd_present(pmdval)))
> goto nomap;
> if (unlikely(pmd_trans_huge(pmdval)))
> goto unmap;
> ...
> return __pte_map(&pmdval, addr);
>
>
> If someone re-faulted the PMD, this function will detect it and reject
> walking it as a PMD table.
>
> PMD handling code has to deal with page table removal, so it needs
> some extra steps.
>
> For PUD handling we don't need that. Once we spot a PUD table, it's
> not going to get yanked underneath our feet.

I have indeed not been able to reproduce the bug with PMDs; sounds like
that’s not an issue.
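
To make sure I follow the protection David describes: it boils down to taking
one lockless snapshot of the entry and validating it before use. A toy
userspace sketch of that pattern (illustrative names and states only, not the
kernel's real types or the actual __pte_offset_map() code):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy states standing in for a PMD entry's possible contents. */
enum pmd_state { PMD_NONE, PMD_TABLE, PMD_HUGE };

/* Take one lockless snapshot of the entry (like pmdp_get_lockless())
 * and only report a walkable page table if the snapshot says so. */
static bool try_map_pte_table(_Atomic int *pmd)
{
	int pmdval = atomic_load_explicit(pmd, memory_order_acquire);

	if (pmdval == PMD_NONE)
		return false;	/* the "goto nomap" case */
	if (pmdval == PMD_HUGE)
		return false;	/* "goto unmap": someone refaulted a huge PMD */
	return true;		/* snapshot is a table; safe to walk its PTEs */
}
```

So if a concurrent fault reinstalls a huge PMD after the split, the snapshot
sees the huge entry and the walk is rejected rather than misread as a page
table pointer.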

>
>>
>>> A lot more details from the cover letter belong into the patch
>>> description. In fact, you don't even need a cover letter :)
>>
>> Hehe, first timer, still figuring out the process.
>
> :)
>
>>
>>> IIUC, this is rather serious and would require a Fixes: and even Cc: stable?
>>>
>>> I'll spend some time tomorrow trying to understand what the real problem
>>> here is.
>>
>> I think so. The bug can be easily triggered by repeatedly booting a VM that passes through a PCI device with large BARs while continuously reading the numa_maps of the main VM process. The reproducer script mainly narrows it down to the specific part where the race occurs, the VFIO DMA set ioctl.
>>
>> Should I raise a bug email to refer to, and resubmit a new RFC v2 (without the cover letter), or keep discussion in this thread for now?
>
> No, it's okay. Let's first discuss the proper fix.
>
>>
>>> But for now: can this only be reproduced with PUDs (which you mention in
>>> the cover letter) or also PMDs?
>>>
>>> For the PMD case I would assume that pte_offset_map_lock() performs
>>> proper checks. And for the PUD case we are missing a re-check under the PTL.
>>
>> Have only seen it with PUDs; will try forcing the mapping to happen with PMDs tomorrow.
>
> Can you try the following:
>
>
> From b3f0a85b9f071e338097147f997f20d1ac796155 Mon Sep 17 00:00:00 2001
> From: "David Hildenbrand (Arm)" <david@xxxxxxxxxx>
> Date: Tue, 10 Mar 2026 10:09:39 +0100
> Subject: [PATCH] tmp
>
> Signed-off-by: David Hildenbrand (Arm) <david@xxxxxxxxxx>
> ---
> mm/pagewalk.c | 22 ++++++++++++++++++----
> 1 file changed, 18 insertions(+), 4 deletions(-)
>
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index cb358558807c..779f6fa00ab7 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -96,6 +96,7 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
> struct mm_walk *walk)
> {
> + pud_t pudval = pudp_get(pud);
> pmd_t *pmd;
> unsigned long next;
> const struct mm_walk_ops *ops = walk->ops;
> @@ -104,6 +105,18 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
> int err = 0;
> int depth = real_depth(3);
>
> + /*
> + * For PTE handling, pte_offset_map_lock() takes care of checking
> + * whether there actually is a page table. But it also has to be
> + * very careful about concurrent page table reclaim. If we spot a PMD
> + * table, it cannot go away, so we can just walk it. However, if we find
> + * something else, we have to retry.
> + */
> + if (!pud_present(pudval) || pud_leaf(pudval)) {
> + walk->action = ACTION_AGAIN;
> + return 0;
> + }
> +
> pmd = pmd_offset(pud, addr);
> do {
> again:
> @@ -176,7 +189,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>
> pud = pud_offset(p4d, addr);
> do {
> - again:
> +again:
> next = pud_addr_end(addr, end);
> if (pud_none(*pud)) {
> if (has_install)
> @@ -217,12 +230,13 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
> else if (pud_leaf(*pud) || !pud_present(*pud))
> continue; /* Nothing to do. */
>
> - if (pud_none(*pud))
> - goto again;
> -
> err = walk_pmd_range(pud, addr, next, walk);
> if (err)
> break;
> +
> + if (walk->action == ACTION_AGAIN)
> + goto again;
> +
> } while (pud++, addr = next, addr != end);
>
> return err;
> --
> 2.43.0

That works, awesome!

Interestingly enough, the VFIO ioctl now also returns “[Errno 22] Invalid argument” where
I would previously see the process reading numa_maps crash.

[dma_map]
dma_map iova=0x000000000000 size=0x000004000000 vaddr=0x00007f7800000000
dma_map FAILED iova=0x020000000000: [Errno 22] Invalid argument
dma_map iova=0x040000000000 size=0x000002000000 vaddr=0x00007f5780000000

For my own understanding, why is this patch preferred over:
- if (pud_none(*pud))
+ if (pud_none(*pud) || pud_leaf(*pud))
in the walk_pud_range function?

I do think moving the check to walk_pmd_range makes the code’s intent clearer, and I
personally prefer the code there, but I don’t see how this check removes the possibility
of a race: after the (!pud_present(pudval) || pud_leaf(pudval)) check, it looks to me
as if the PMD entry could still disappear between the split and this check?
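
To check my understanding of the ACTION_AGAIN flow, I modelled it as a toy
retry loop in plain C11 atomics (illustrative names only, nothing here is the
kernel's real API): the child walk snapshots the entry once, and the parent
retries whenever the snapshot turned out not to be a present table.

```c
#include <assert.h>
#include <stdatomic.h>

/* Toy states standing in for a PUD entry's possible contents. */
enum pud_state { PUD_NONE, PUD_TABLE, PUD_LEAF };
enum action { ACTION_SUBTREE, ACTION_AGAIN };

/* Toy walk_pmd_range(): snapshot the PUD once; if it is no longer a
 * present table, ask the caller to retry instead of descending. The
 * key point (as I read the thread) is that once the snapshot observed
 * a table, that table cannot be freed under the walker, so descending
 * via the snapshot cannot hit a refaulted leaf. */
static enum action toy_walk_pmd_range(_Atomic int *pud)
{
	int pudval = atomic_load(pud);

	if (pudval != PUD_TABLE)
		return ACTION_AGAIN;
	return ACTION_SUBTREE;
}

/* Toy walk_pud_range(): loop until either there is nothing to descend
 * into at this level, or the child walk accepted the entry. Returns
 * how many times the "goto again" path was taken. */
static int toy_walk_pud_range(_Atomic int *pud)
{
	int retries = 0;

	for (;;) {
		int val = atomic_load(pud);

		if (val == PUD_NONE || val == PUD_LEAF)
			return retries;	/* nothing to do at this entry */
		if (toy_walk_pmd_range(pud) != ACTION_AGAIN)
			return retries;	/* descended successfully */
		retries++;		/* the "goto again" path */
	}
}
```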

Anyway, this patch resolves the bug and looks good to me. What’s the course of
action, given that we probably want to backport this to earlier kernels as well? Shall
I send in a new PATCH without a cover letter and take it from there?

>
>
> --
> Cheers,
>
> David

