Re: [PATCH v3 1/3] mm: memfd/hugetlb: introduce memfd-based userspace MFR policy

From: Miaohe Lin

Date: Tue Feb 10 2026 - 02:31:44 EST


On 2026/2/10 12:47, Jiaqi Yan wrote:
> On Mon, Feb 9, 2026 at 3:54 AM Miaohe Lin <linmiaohe@xxxxxxxxxx> wrote:
>>
>> On 2026/2/4 3:23, Jiaqi Yan wrote:
>>> Sometimes immediately hard offlining a large chunk of contiguous memory
>>> having uncorrected memory errors (UE) may not be the best option.
>>> Cloud providers usually serve capacity- and performance-critical guest
>>> memory with 1G HugeTLB hugepages, as this significantly reduces the
>>> overhead associated with managing page tables and TLB misses. However,
>>> for today's HugeTLB system, once a byte of memory in a hugepage is
>>> hardware corrupted, the kernel discards the whole hugepage, including
>>> the healthy portion. Customer workloads running in the VM can hardly
>>> recover from such a great loss of memory.
>>
>> Thanks for your patch. Some questions below.
>>
>>>
>>> Therefore keeping or discarding a large chunk of contiguous memory
>>> owned by userspace (particularly to serve guest memory) due to
>>> recoverable UE may better be controlled by userspace process
>>> that owns the memory, e.g. VMM in the Cloud environment.
>>>
>>> Introduce a memfd-based userspace memory failure (MFR) policy,
>>> MFD_MF_KEEP_UE_MAPPED. It is possible to support other memfd types,
>>> but the current implementation only covers HugeTLB.
>>>
>>> For a hugepage associated with MFD_MF_KEEP_UE_MAPPED enabled memfd,
>>> whenever it runs into a new UE,
>>>
>>> * MFR defers hard offline operations, i.e., unmapping and
>>
>> So the folio can't be unpoisoned until hugetlb folio becomes free?
>
> Are you asking, from a testing perspective, whether we are still able
> to clean up injected test errors via unpoison_memory() with
> MFD_MF_KEEP_UE_MAPPED?
>
> If so, unpoison_memory() can't turn the HWPoison hugetlb page back into
> a normal hugetlb page, as MFD_MF_KEEP_UE_MAPPED automatically dissolves

We might lose some testability, but that should be an acceptable compromise.

> it. unpoison_memory(pfn) can probably still turn the HWPoison raw page
> back to a normal one, but you already lost the hugetlb page.
>
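
As an aside, a minimal sketch of the test flow being discussed, assuming
CONFIG_HWPOISON_INJECT and CAP_SYS_ADMIN; the pfn lookup and most error
handling are elided for brevity:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static int inject_then_unpoison(void *addr, unsigned long pfn)
{
	char buf[32];
	int fd;

	/* Software-inject a UE into the page backing addr. */
	if (madvise(addr, getpagesize(), MADV_HWPOISON))
		return -1;

	/*
	 * unpoison-pfn calls unpoison_memory(). As discussed above, once
	 * MFD_MF_KEEP_UE_MAPPED has dissolved the hugetlb folio, this can
	 * only recover the raw page, not the hugetlb page.
	 */
	fd = open("/sys/kernel/debug/hwpoison/unpoison-pfn", O_WRONLY);
	if (fd < 0)
		return -1;
	snprintf(buf, sizeof(buf), "0x%lx\n", pfn);
	if (write(fd, buf, strlen(buf)) < 0) {
		close(fd);
		return -1;
	}
	close(fd);
	return 0;
}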
>>
>>> dissolving. MFR still sets the HWPoison flag, holds a refcount
>>> for every raw HWPoison page, records them in a list, and sends SIGBUS
>>> to the consuming thread, but with si_addr_lsb reduced to PAGE_SHIFT.
>>> If userspace is able to handle the SIGBUS, the HWPoison hugepage
>>> remains accessible via the mapping created with that memfd.
>>>
>>> * If the memory was not faulted in yet, the fault handler also
>>> allows faulting in the HWPoison folio.
>>>
>>> For a MFD_MF_KEEP_UE_MAPPED enabled memfd, when it is closed, or
>>> when the userspace process truncates its hugepages:
>>>
>>> * When the HugeTLB in-memory file system removes the filemap's
>>> folios one by one, it asks MFR to deal with HWPoison folios
>>> on the fly, implemented by filemap_offline_hwpoison_folio().
>>>
>>> * MFR drops the refcounts being held for the raw HWPoison
>>> pages within the folio. Once the HWPoison folio becomes
>>> free, MFR dissolves it into a set of raw pages. The healthy pages
>>> are recycled into buddy allocator, while the HWPoison ones are
>>> prevented from re-allocation.
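
To make the proposed flag concrete, a hedged userspace sketch of how a
VMM might opt in; MFD_MF_KEEP_UE_MAPPED is the flag introduced by this
patch (its value comes from the patch's uapi header), and the SIGBUS
handler only logs where a real VMM would notify the guest:

#define _GNU_SOURCE
#include <linux/memfd.h>	/* MFD_HUGETLB, MFD_HUGE_1GB, MFD_MF_KEEP_UE_MAPPED */
#include <signal.h>
#include <sys/mman.h>
#include <unistd.h>

static void sigbus_handler(int sig, siginfo_t *si, void *uc)
{
	/*
	 * With MFD_MF_KEEP_UE_MAPPED, si_addr_lsb is PAGE_SHIFT, so only
	 * one raw page is lost instead of the whole 1G hugepage.
	 */
	if (si->si_code == BUS_MCEERR_AR) {
		static const char msg[] = "UE: one raw page lost\n";
		write(STDERR_FILENO, msg, sizeof(msg) - 1);
	}
}

int main(void)
{
	struct sigaction sa = {
		.sa_sigaction = sigbus_handler,
		.sa_flags = SA_SIGINFO,
	};
	size_t len = 1UL << 30;	/* one 1G hugepage */
	void *mem;
	int fd;

	sigaction(SIGBUS, &sa, NULL);
	fd = memfd_create("guest-ram",
			  MFD_HUGETLB | MFD_HUGE_1GB | MFD_MF_KEEP_UE_MAPPED);
	if (fd < 0 || ftruncate(fd, len))
		return 1;
	mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	/* Hand mem to the guest; HWPoison pages stay mapped until close. */
	return mem == MAP_FAILED;
}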
>>>
>> ...
>>
>>>
>>> +static void filemap_offline_hwpoison_folio_hugetlb(struct folio *folio)
>>> +{
>>> + int ret;
>>> + struct llist_node *head;
>>> + struct raw_hwp_page *curr, *next;
>>> +
>>> + /*
>>> + * Since folio is still in the folio_batch, drop the refcount
>>> + * elevated by filemap_get_folios.
>>> + */
>>> + folio_put_refs(folio, 1);
>>> + head = llist_del_all(raw_hwp_list_head(folio));
>>
>> We might race with get_huge_page_for_hwpoison()? llist_add() might be called
>> by folio_set_hugetlb_hwpoison() just after llist_del_all()?
>
> Oh, when there is a new UE while we are releasing the folio here, right?

Right.

> In that case, would mutex_lock(&mf_mutex) eliminate potential race?

IMO spin_lock_irq(&hugetlb_lock) might be better.
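
Something like this, as a rough sketch against the patch's
filemap_offline_hwpoison_folio_hugetlb(): llist_add() into the raw_hwp
list happens in folio_set_hugetlb_hwpoison(), which already runs under
hugetlb_lock via get_huge_page_for_hwpoison(), so emptying the list
under the same lock should close the window:

	spin_lock_irq(&hugetlb_lock);
	head = llist_del_all(raw_hwp_list_head(folio));
	spin_unlock_irq(&hugetlb_lock);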

>
>>
>>> +
>>> + /*
>>> + * Release refcounts held by try_memory_failure_hugetlb, one per
>>> + * HWPoison-ed page in the raw hwp list.
>>> + *
>>> + * Set HWPoison flag on each page so that free_has_hwpoisoned()
>>> + * can exclude them during dissolve_free_hugetlb_folio().
>>> + */
>>> + llist_for_each_entry_safe(curr, next, head, node) {
>>> + folio_put(folio);
>>
>> The hugetlb folio refcnt will only be increased once even if it contains multiple UE sub-pages.
>> See __get_huge_page_for_hwpoison() for details. So folio_put() might be called more times than
>> folio_try_get() in __get_huge_page_for_hwpoison().
>
> The changes in folio_set_hugetlb_hwpoison() should make
> __get_huge_page_for_hwpoison() not take the "out" path, which
> decreases the elevated refcount on the folio. IOW, every time a new UE
> happens, we handle the hugetlb page as if it were an in-use hugetlb
> page.

See the code snippet below (comments [1] and [2]):

int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
				 bool *migratable_cleared)
{
	struct page *page = pfn_to_page(pfn);
	struct folio *folio = page_folio(page);
	int ret = 2;	/* fallback to normal page handling */
	bool count_increased = false;

	if (!folio_test_hugetlb(folio))
		goto out;

	if (flags & MF_COUNT_INCREASED) {
		ret = 1;
		count_increased = true;
	} else if (folio_test_hugetlb_freed(folio)) {
		ret = 0;
	} else if (folio_test_hugetlb_migratable(folio)) {

^^^^ *hugetlb_migratable is checked before trying to get the folio refcnt* [1]

		ret = folio_try_get(folio);
		if (ret)
			count_increased = true;
	} else {
		ret = -EBUSY;
		if (!(flags & MF_NO_RETRY))
			goto out;
	}

	if (folio_set_hugetlb_hwpoison(folio, page)) {
		ret = -EHWPOISON;
		goto out;
	}

	/*
	 * Clearing hugetlb_migratable for hwpoisoned hugepages to prevent them
	 * from being migrated by memory hotremove.
	 */
	if (count_increased && folio_test_hugetlb_migratable(folio)) {
		folio_clear_hugetlb_migratable(folio);

^^^^^ *hugetlb_migratable is cleared the first time the folio is seen* [2]

		*migratable_cleared = true;
	}

Or am I missing something?

>
>>
>>> + SetPageHWPoison(curr->page);
>>
>> If the hugetlb folio's vmemmap is optimized, I think SetPageHWPoison might trigger a BUG.
>
> Ah, I see, vmemmap optimization doesn't allow us to move flags from
> the raw_hwp_list to tail pages. I guess the best I can do is to bail
> out if vmemmap optimization is enabled, like
> folio_clear_hugetlb_hwpoison() does.

I think you can do this after hugetlb_vmemmap_restore_folio() is called.
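
E.g. a rough sketch, assuming the offline path can use the vmemmap
helper directly (h being the folio's hstate):

	/*
	 * With HVO, tail struct pages are read-only aliases; restore the
	 * full vmemmap before writing per-page flags on the raw pages.
	 */
	if (hugetlb_vmemmap_restore_folio(h, folio))
		return;	/* restore failed, cannot touch tail pages */

	llist_for_each_entry_safe(curr, next, head, node) {
		folio_put(folio);
		SetPageHWPoison(curr->page);
	}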

Thanks.