Re: [PATCH v3 13/13] mm/huge_memory: add and use has_deposited_pgtable()
From: David Hildenbrand (Arm)
Date: Tue Apr 14 2026 - 05:45:12 EST
On 4/14/26 09:36, Yin Tirui wrote:
> Hi Lorenzo and David,
>
> Sorry for the late reply.
>
> On 4/7/26 18:48, Lorenzo Stoakes wrote:
>> On Thu, Apr 02, 2026 at 03:49:35PM +0800, Yin Tirui wrote:
>>>
>>> Hi Lorenzo,
>>>
>>> Thanks for the quick reply. I will definitely CC you on the v4 series.
>>
>> Thanks.
>>
>>>
>>> Here is the dilemma:
>>>
>>> Currently, VFIO uses vmf_insert_pfn_pmd() to create huge pfnmaps on page
>>> faults. This sets VM_PFNMAP in vfio_pci_core_mmap(), but it does not
>>> deposit a pgtable (unless arch_needs_pgtable_deposit() is true).
>>
>> Hmmm... it's only the VFIO and hyperv drivers using this.
>>
>> Wouldn't we generally want a deposited page table here now that we're
>> allowing huge PFN maps?
>>
>> Or are these _special cases_ where we have a PMD-sized entry but do not
>> necessarily want to treat it as THP?
>>
>> This is a real wrinkle in this whole series no?
>>
>> David - any thoughts?
Sorry, catching up with that now.
>>
>>>
>>> To resolve this,
>>>
>>> Option A: Force VFIO (vmf_insert_pfn_pmd) to also deposit pgtables. This
>>> unifies the VM_PFNMAP lifecycle. However, since VFIO can refault,
>>> depositing pgtables here incurs unnecessary memory overhead.
>>
>> How can VFIO refault as a PFN mapping? Does it intentionally sometimes
>> clear PTE entries to effect a refault, and implement a custom fault
>> handler?
>>
>> I guess having a fault handler makes it refaultable...
>>
>> I mean obviously that then contradicts the suggested comment above :)
>>
>> That seems to me to cast a bit of a question over the whole series - having
>> PMD mappings that are _sometimes_ THP and _sometimes_ not is weird (TM).
>>
>> And it'd suck to add - yet another very specific check - to determine if we
>> do, in fact, assume THP for a PMD sized PFN map.
>
> Yes, exactly. VFIO and Hyper-V rely on their custom `.fault` handlers to
> dynamically build mappings. In contrast, `remap_pfn_range()` establishes
> static pre-mappings.
>
>>
>>>
>>> Option B: Introduce a new VMA flag set during remap_pfn_range(), which
>>> we can explicitly check in has_deposited_pgtable().
>>
>> Yeah would rather not, that feels like a hack.
>
> Agreed.
>
>>
>>>
>>> Option C: Check vma->vm_ops->fault (and huge_fault). We would only
>>> deposit pgtables for mappings without fault handlers. However, this is
>>> fragile because a driver might still register a .fault() handler that
>>> simply returns VM_FAULT_SIGBUS.
>>
>> I mean again this is yet another check (TM). But probably the most preferable I
>> think.
>>
>> Wouldn't a driver doing that be somewhat redundant? E.g. in do_fault():
>>
>> 	if (!vma->vm_ops->fault) {
>> 		vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>> 					       vmf->address, &vmf->ptl);
>> 		if (unlikely(!vmf->pte))
>> 			ret = VM_FAULT_SIGBUS;
>>
>> And so can expect maybe some more redundancy if they also happen to map
>> PMD-sized ranges? :)
>>
>> And the only two callers of vmf_insert_pfn_pmd() - hyperv and VFIO both
>> implement actual fault handlers anyway.
>>
>> So I think this is fine?
>>
>
> I agree.
>
> David, since Lorenzo also asked for your thoughts on the overall design
> aspect ("sometimes THP and sometimes not"), what is your opinion on
> this? Should we proceed with checking `!vma->vm_ops->fault` to
> differentiate the deposit behavior for huge PFNMAPs?
I mean, we need some indication, also during folio splitting, of whether
we can just discard the PMD (as we can refault it later) or whether we
really have to install a PTE table.
What if someone used remap_pfn_range() on some part of the VMA, and
faults on another part?
Doesn't really work.
Do we have users of remap_pfn_range() that have ->fault set? If not, we
should probably just disallow this combination.
Then we know for sure whether something was installed through
remap_pfn_range() or through a fault handler.
--
Cheers,
David