Re: [RFC 1/1] mm/pagewalk: don't split device-backed huge pfnmaps
From: David Hildenbrand (Arm)
Date: Tue Mar 10 2026 - 05:18:30 EST
On 3/10/26 00:02, Boone, Max wrote:
> On Mar 9, 2026 9:19 PM, "David Hildenbrand (Arm)" <david@xxxxxxxxxx> wrote:
>>
>> On 3/9/26 18:49, Max Boone wrote:
>>> Don't split and descend into special PMD/PUDs, which are generally
>>> device-backed huge pfnmaps as used by vfio for BAR mappings. These
>>> can be faulted back in after splitting and before descending, which
>>> opens a race leading to an illegal read.
>>>
>>> Signed-off-by: Max Boone <mboone@xxxxxxxxxx>
>>> Signed-off-by: Max Tottenham <mtottenh@xxxxxxxxxx>
>>>
>>> ---
>>> mm/pagewalk.c | 24 ++++++++++++++++++++----
>>> 1 file changed, 20 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
>>> index a94c401ab..d1460dd84 100644
>>> --- a/mm/pagewalk.c
>>> +++ b/mm/pagewalk.c
>>> @@ -147,10 +147,18 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
>>> continue;
>>> }
>>>
>>> - if (walk->vma)
>>> + if (walk->vma) {
>>> + /*
>>> + * Don't descend into device-backed pfnmaps,
>>> + * they might refault the PMD entry.
>>> + */
>>> + if (unlikely(pmd_special(*pmd)))
>>> + continue;
>>
>> In general, if you're using pmd_special()/pud_special() and friends in
>> ordinary page table walking code, you are doing something wrong. We
>> don't want to leak these details into such page table walkers.
>
> That sounds sensible; there is a check in the split_huge_pud macro, which previously included DAX memory.
>
> Related to handling that macro, I see another proposed patch for lazy provisioning of PTEs for PMD-order THPs [1]. Perhaps adding a return code to the split functions would allow a better solution here as well?
>
Maybe. I think the behavior of trying to split is okay; we just have
to teach the code to deal with races.
After all, the very same problem can likely be triggered by another
thread performing the splitting/unmapping concurrently in some other
code path.
> I'm not sure that making the split (or rather unmap; calling it a split has been a bit confusing to me, as it doesn't allocate PMDs) a no-op will improve things, as to my understanding the walker will still try to descend.
>
>> We do have vm_normal_page_pmd() to identify special mappings, but I
>> first have to understand what exactly you are trying to solve here.
>
> Specifically for the page walker: avoid splitting and descending into the PUD-order pfnmaps that VFIO creates for BAR mappings, as these can represent hardware control registers rather than regular memory. I haven't been able to reproduce it with PMD-level pfnmaps, but I'll build a kernel with PUD-level pfnmaps disabled tomorrow.
>
> But of course I'm mainly concerned with fixing the race so that reading numa_maps does not cause an illegal read, which crashes the reading process while it holds the mmap lock of the target process (leaving subsequent reads of /proc frozen, waiting for an mmap lock they will never get).
Right, that's what we should focus on.
>
>> (You would also be affecting the remapping of the huge zero folio.)
>
> Ah, good one. I do think this race can occur with PMD-level mappings, looking at the walking & splitting logic - but given that the PUD-level mapping triggered the (rare) occurrence, I'm fine with focusing there first. I guess it helps that we don't have 1G THPs, but it would be good to treat 2M and 1G similarly?
I don't think it can happen for PMDs, as pte_offset_map_lock() double-checks
that we really have a page table there. See __pte_offset_map() where we do a
	pmdval = pmdp_get_lockless(pmd);
	...
	if (unlikely(pmd_none(pmdval) || !pmd_present(pmdval)))
		goto nomap;
	if (unlikely(pmd_trans_huge(pmdval)))
		goto unmap;
	...
	return __pte_map(&pmdval, addr);
If someone re-faulted the PMD, this function will detect it and reject
walking it as a PMD table.
PMD handling code has to deal with page table removal, so it needs
some extra steps.
For PUD handling we don't need that. Once we spot a PUD table, it's
not going to get yanked underneath our feet.
>
>> A lot more details from the cover letter belong into the patch
>> description. In fact, you don't even need a cover letter :)
>
> Hehe, first timer, still figuring out the process.
:)
>
>> IIUC, this is rather serious and would require a Fixes: and even Cc: stable?
>>
>> I'll spend some time tomorrow trying to understand what the real problem
>> here is.
>
> I think so. The bug can easily be triggered by repeatedly booting a VM that passes through a PCI device with large BARs while continuously reading the numa_maps of the main VM process. The reproducer script mainly serves to narrow things down to the specific part where the race occurs, the VFIO DMA set ioctl.
>
> Should I raise a bug email to refer to, and resubmit a new RFC v2 (without the cover letter), or keep discussion in this thread for now?
No, it's okay. Let's first discuss the proper fix.
>
>> But for now: can this only be reproduced with PUDs (which you mention in
>> the cover letter) or also PMDs?
>>
>> For the PMD case I would assume that pte_offset_map_lock() performs
>> proper checks. And for the PUD case we are missing a re-check under the PTL.
>
> Have only seen it with PUDs, will try forcing the mapping to happen with PMDs tomorrow.
Can you try the following: