Re: [PATCH v3 1/3] mm: memfd/hugetlb: introduce memfd-based userspace MFR policy

From: Miaohe Lin

Date: Mon Mar 09 2026 - 22:25:48 EST


On 2026/3/9 23:47, Jiaqi Yan wrote:
> On Mon, Mar 9, 2026 at 12:41 AM Miaohe Lin <linmiaohe@xxxxxxxxxx> wrote:
>>
>> On 2026/3/9 12:53, Jiaqi Yan wrote:
>>> On Mon, Feb 23, 2026 at 11:30 PM Miaohe Lin <linmiaohe@xxxxxxxxxx> wrote:
>>>>
>>>> On 2026/2/13 13:01, Jiaqi Yan wrote:
>>>>> On Mon, Feb 9, 2026 at 11:31 PM Miaohe Lin <linmiaohe@xxxxxxxxxx> wrote:
>>>>>>
>>>>>> On 2026/2/10 12:47, Jiaqi Yan wrote:
>>>>>>> On Mon, Feb 9, 2026 at 3:54 AM Miaohe Lin <linmiaohe@xxxxxxxxxx> wrote:
>>>>>>>>
>>>>>>>> On 2026/2/4 3:23, Jiaqi Yan wrote:
>>>>>>>>> Sometimes immediately hard offlining a large chunk of contiguous memory
>>>>>>>>> having uncorrected memory errors (UE) may not be the best option.
>>>>>>>>> Cloud providers usually serve capacity- and performance-critical guest
>>>>>>>>> memory with 1G HugeTLB hugepages, as this significantly reduces the
>>>>>>>>> overhead associated with managing page tables and TLB misses. However,
>>>>>>>>> for today's HugeTLB system, once a byte of memory in a hugepage is
>>>>>>>>> hardware corrupted, the kernel discards the whole hugepage, including
>>>>>>>>> the healthy portion. Customer workloads running in the VM can hardly
>>>>>>>>> recover from such a great loss of memory.
>>>>>>>>
>>>>>>>> Thanks for your patch. Some questions below.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Therefore keeping or discarding a large chunk of contiguous memory
>>>>>>>>> owned by userspace (particularly to serve guest memory) due to
>>>>>>>>> recoverable UE may better be controlled by the userspace process
>>>>>>>>> that owns the memory, e.g. the VMM in a cloud environment.
>>>>>>>>>
>>>>>>>>> Introduce a memfd-based userspace memory failure (MFR) policy,
>>>>>>>>> MFD_MF_KEEP_UE_MAPPED. It is possible to support other memfd types,
>>>>>>>>> but the current implementation only covers HugeTLB.
>>>>>>>>>
>>>>>>>>> For a hugepage associated with MFD_MF_KEEP_UE_MAPPED enabled memfd,
>>>>>>>>> whenever it runs into a new UE,
>>>>>>>>>
>>>>>>>>> * MFR defers hard offline operations, i.e., unmapping and
>>>>>>>>
>>>>>>>> So the folio can't be unpoisoned until the hugetlb folio becomes free?
>>>>>>>
>>>>>>> Are you asking, from a testing perspective, whether we are still able
>>>>>>> to clean up injected test errors via unpoison_memory() with
>>>>>>> MFD_MF_KEEP_UE_MAPPED?
>>>>>>>
>>>>>>> If so, unpoison_memory() can't turn the HWPoison hugetlb page back
>>>>>>> into a normal hugetlb page, as MFD_MF_KEEP_UE_MAPPED automatically dissolves
>>>>>>
>>>>>> We might lose some testability but that should be an acceptable compromise.
>>>>>
>>>>> To clarify, looking at unpoison_memory(), it seems unpoison should
>>>>> still work if called before truncation or memfd close.
>>>>>
>>>>> What I wanted to say is, for my test hugetlb-mfr.c, since I really
>>>>> want to test the cleanup code (dissolving a free hugepage having
>>>>> multiple errors) after truncation or memfd close, we can only
>>>>> unpoison the raw pages rejected by the buddy allocator.
>>>>>
>>>>>>
>>>>>>> it. unpoison_memory(pfn) can probably still turn the HWPoison raw page
>>>>>>> back to a normal one, but you already lost the hugetlb page.
>>>>>>>
>>>>>>>>
>>>>>>>>> dissolving. MFR still sets the HWPoison flag, holds a refcount
>>>>>>>>> for every raw HWPoison page, records them in a list, and sends SIGBUS
>>>>>>>>> to the consuming thread, but si_addr_lsb is reduced to PAGE_SHIFT.
>>>>>>>>> If userspace is able to handle the SIGBUS, the HWPoison hugepage
>>>>>>>>> remains accessible via the mapping created with that memfd.
>>>>>>>>>
>>>>>>>>> * If the memory has not been faulted in yet, the fault handler also
>>>>>>>>> allows faulting in the HWPoison folio.
>>>>>>>>>
>>>>>>>>> For a MFD_MF_KEEP_UE_MAPPED enabled memfd, when it is closed, or
>>>>>>>>> when the userspace process truncates its hugepages:
>>>>>>>>>
>>>>>>>>> * When the HugeTLB in-memory file system removes the filemap's
>>>>>>>>> folios one by one, it asks MFR to deal with HWPoison folios
>>>>>>>>> on the fly, implemented by filemap_offline_hwpoison_folio().
>>>>>>>>>
>>>>>>>>> * MFR drops the refcounts being held for the raw HWPoison
>>>>>>>>> pages within the folio. Now that the HWPoison folio becomes
>>>>>>>>> free, MFR dissolves it into a set of raw pages. The healthy pages
>>>>>>>>> are recycled into buddy allocator, while the HWPoison ones are
>>>>>>>>> prevented from re-allocation.
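
For reference, userspace consumption of this flag could look roughly like
the sketch below. MFD_MF_KEEP_UE_MAPPED and its SIGBUS semantics are taken
from this series; the flag value, the 1G sizing, and the handler details
are illustrative assumptions, not part of the patch:

#define _GNU_SOURCE
#include <signal.h>
#include <sys/mman.h>
#include <unistd.h>

/* Not in uapi headers yet; the value is a placeholder for this series. */
#ifndef MFD_MF_KEEP_UE_MAPPED
#define MFD_MF_KEEP_UE_MAPPED	0x0020U
#endif
#ifndef MFD_HUGE_1GB
#define MFD_HUGE_1GB	(30U << 26)	/* HUGETLB_FLAG_ENCODE_1GB */
#endif

#define LEN (1UL << 30)	/* one 1G hugepage; must be reserved beforehand */

static int page_shift;

static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
{
	(void)sig;
	(void)ctx;
	/*
	 * With MFD_MF_KEEP_UE_MAPPED, si_addr_lsb is PAGE_SHIFT, i.e. only
	 * the raw page under si->si_addr is lost. A VMM could repair or
	 * discard just that page (e.g. forward a page-sized MCE to the
	 * guest) instead of losing the whole 1G hugepage.
	 */
	if (si->si_code == BUS_MCEERR_AR && si->si_addr_lsb == page_shift)
		_exit(0);	/* recoverable: a single raw page */
	_exit(1);		/* whole-hugepage loss or unexpected SIGBUS */
}

int main(void)
{
	struct sigaction sa = {
		.sa_sigaction = sigbus_handler,
		.sa_flags = SA_SIGINFO,
	};
	char *mem;
	int fd;

	page_shift = __builtin_ctzl((unsigned long)sysconf(_SC_PAGESIZE));
	sigaction(SIGBUS, &sa, NULL);

	fd = memfd_create("guest_mem", MFD_HUGETLB | MFD_HUGE_1GB |
				       MFD_MF_KEEP_UE_MAPPED);
	if (fd < 0)
		return 1;
	ftruncate(fd, LEN);
	mem = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (mem == MAP_FAILED)
		return 1;
	mem[0] = 1;	/* fault in; HWPoison folios may also be faulted in */
	/* ... serve guest memory from mem; UEs SIGBUS at 4K granularity ... */
	munmap(mem, LEN);
	close(fd);	/* HWPoison hugepages are dissolved at removal */
	return 0;
}
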
>>>>>>>>>
>>>>>>>> ...
>>>>>>>>
>>>>>>>>>
>>>>>>>>> +static void filemap_offline_hwpoison_folio_hugetlb(struct folio *folio)
>>>>>>>>> +{
>>>>>>>>> +	int ret;
>>>>>>>>> +	struct llist_node *head;
>>>>>>>>> +	struct raw_hwp_page *curr, *next;
>>>>>>>>> +
>>>>>>>>> +	/*
>>>>>>>>> +	 * Since folio is still in the folio_batch, drop the refcount
>>>>>>>>> +	 * elevated by filemap_get_folios.
>>>>>>>>> +	 */
>>>>>>>>> +	folio_put_refs(folio, 1);
>>>>>>>>> +	head = llist_del_all(raw_hwp_list_head(folio));
>>>>>>>>
>>>>>>>> We might race with get_huge_page_for_hwpoison()? llist_add() might be called
>>>>>>>> by folio_set_hugetlb_hwpoison() just after llist_del_all()?
>>>>>>>
>>>>>>> Oh, when there is a new UE while we are releasing the folio here, right?
>>>>>>
>>>>>> Right.
>>>>>>
>>>>>>> In that case, would mutex_lock(&mf_mutex) eliminate the potential race?
>>>>>>
>>>>>> IMO spin_lock_irq(&hugetlb_lock) might be better.
>>>>>
>>>>> Looks like I don't need any lock given the correction below.
>>>>>
>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>>> +
>>>>>>>>> +	/*
>>>>>>>>> +	 * Release refcounts held by try_memory_failure_hugetlb, one per
>>>>>>>>> +	 * HWPoison-ed page in the raw hwp list.
>>>>>>>>> +	 *
>>>>>>>>> +	 * Set HWPoison flag on each page so that free_has_hwpoisoned()
>>>>>>>>> +	 * can exclude them during dissolve_free_hugetlb_folio().
>>>>>>>>> +	 */
>>>>>>>>> +	llist_for_each_entry_safe(curr, next, head, node) {
>>>>>>>>> +		folio_put(folio);
>>>>>>>>
>>>>>>>> The hugetlb folio refcnt will only be increased once even if it contains multiple UE sub-pages.
>>>>>>>> See __get_huge_page_for_hwpoison() for details. So folio_put() might be called more times than
>>>>>>>> folio_try_get() in __get_huge_page_for_hwpoison().
>>>>>>>
>>>>>>> The changes in folio_set_hugetlb_hwpoison() should make
>>>>>>> __get_huge_page_for_hwpoison() not take the "out" path, which
>>>>>>> decreases the increased refcount for the folio. IOW, every time a new
>>>>>>> UE happens, we handle the hugetlb page as if it were an in-use
>>>>>>> hugetlb page.
>>>>>>
>>>>>> See the code snippet below (comments [1] and [2]):
>>>>>>
>>>>>> int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
>>>>>> 				 bool *migratable_cleared)
>>>>>> {
>>>>>> 	struct page *page = pfn_to_page(pfn);
>>>>>> 	struct folio *folio = page_folio(page);
>>>>>> 	int ret = 2;	/* fallback to normal page handling */
>>>>>> 	bool count_increased = false;
>>>>>>
>>>>>> 	if (!folio_test_hugetlb(folio))
>>>>>> 		goto out;
>>>>>>
>>>>>> 	if (flags & MF_COUNT_INCREASED) {
>>>>>> 		ret = 1;
>>>>>> 		count_increased = true;
>>>>>> 	} else if (folio_test_hugetlb_freed(folio)) {
>>>>>> 		ret = 0;
>>>>>> 	} else if (folio_test_hugetlb_migratable(folio)) {
>>>>>>
>>>>>> ^^^^ *hugetlb_migratable is checked before trying to get the folio refcnt* [1]
>>>>>>
>>>>>> 		ret = folio_try_get(folio);
>>>>>> 		if (ret)
>>>>>> 			count_increased = true;
>>>>>> 	} else {
>>>>>> 		ret = -EBUSY;
>>>>>> 		if (!(flags & MF_NO_RETRY))
>>>>>> 			goto out;
>>>>>> 	}
>>>>>>
>>>>>> 	if (folio_set_hugetlb_hwpoison(folio, page)) {
>>>>>> 		ret = -EHWPOISON;
>>>>>> 		goto out;
>>>>>> 	}
>>>>>>
>>>>>> 	/*
>>>>>> 	 * Clearing hugetlb_migratable for hwpoisoned hugepages to prevent them
>>>>>> 	 * from being migrated by memory hotremove.
>>>>>> 	 */
>>>>>> 	if (count_increased && folio_test_hugetlb_migratable(folio)) {
>>>>>> 		folio_clear_hugetlb_migratable(folio);
>>>>>>
>>>>>> ^^^^^ *hugetlb_migratable is cleared when seeing the folio for the first time* [2]
>>>>>>
>>>>>> 		*migratable_cleared = true;
>>>>>> 	}
>>>>>>
>>>>>> Or am I missing something?
>>>>>
>>>>> Thanks for your explanation! You are absolutely right. It turns out
>>>>> the extra refcount I saw (while running hugetlb-mfr.c) on the folio
>>>>> at the moment of filemap_offline_hwpoison_folio_hugetlb() is actually
>>>>> because of the MF_COUNT_INCREASED during MADV_HWPOISON. In the past I
>>>>> thought that was the effect of folio_try_get() in
>>>>> __get_huge_page_for_hwpoison(), which is wrong. Now I see two cases:
>>>>> - MADV_HWPOISON: instead of __get_huge_page_for_hwpoison(),
>>>>> madvise_inject_error() is the one that increments the hugepage refcount
>>>>> for every error injected. Different from other cases,
>>>>> MFD_MF_KEEP_UE_MAPPED makes the hugepage still an in-use page after
>>>>> memory_failure(MF_COUNT_INCREASED), so I think madvise_inject_error()
>>>>> should decrement it in the MFD_MF_KEEP_UE_MAPPED case.
>>>>> - In the real world: as you pointed out, MF always just increments the
>>>>> hugepage refcount once in __get_huge_page_for_hwpoison(), even if it
>>>>> runs into multiple errors. When
>>>>
>>>> This might not always hold true. When MF occurs while the hugetlb folio is under isolation (hugetlb_migratable is
>>>> cleared and an extra folio refcnt is held by the isolating code in that case), __get_huge_page_for_hwpoison() won't take an
>>>> extra folio refcnt.
>>>>
>>>>> filemap_offline_hwpoison_folio_hugetlb() drops the refcount elevated
>>>>> by filemap_get_folios(), it only needs to decrement again if
>>>>> folio_ref_dec_and_test() returns false. I tested something like below:
>>>>>
>>>>> 	/* drop the refcount elevated by filemap_get_folios. */
>>>>> 	folio_put(folio);
>>>>> 	if (folio_ref_count(folio))
>>>>> 		folio_put(folio);
>>>>> 	/* now refcount should be zero. */
>>>>> 	ret = dissolve_free_hugetlb_folio(folio);
>>>>
>>>> So I think the above code might drop the folio refcnt held by the isolating code.
>>>
>>> Hi Miaohe, thanks for raising the concern. Given the two points below:
>>> - both folio_isolate_hugetlb() and get_huge_page_for_hwpoison() are
>>> guarded by hugetlb_lock.
>>> - hugetlb_update_hwpoison() only calls folio_test_set_hwpoison() for a
>>> non-isolated folio after folio_try_get() succeeds.
>>>
>>> as long as folio_test_set_hwpoison() is true here, this refcount
>>> should never come from folio_isolate_hugetlb(). What do you think?
>>>
>>
>> Let's think about the scenario below, where __get_huge_page_for_hwpoison()
>> encounters an isolated hugetlb folio:
>>
>> int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
>> 				 bool *migratable_cleared)
>> {
>> 	struct page *page = pfn_to_page(pfn);
>> 	struct folio *folio = page_folio(page);
>> 	bool count_increased = false;
>> 	int ret, rc;
>>
>> 	if (!folio_test_hugetlb(folio)) {
>> 		ret = MF_HUGETLB_NON_HUGEPAGE;
>> 		goto out;
>> 	} else if (flags & MF_COUNT_INCREASED) {
>> 		ret = MF_HUGETLB_IN_USED;
>> 		count_increased = true;
>> 	} else if (folio_test_hugetlb_freed(folio)) {
>> 		ret = MF_HUGETLB_FREED;
>> 	} else if (folio_test_hugetlb_migratable(folio)) {
>>
>> ^^^^ *Since hugetlb_migratable is cleared for the isolated hugetlb folio*
>>
>> 		if (folio_try_get(folio)) {
>> 			ret = MF_HUGETLB_IN_USED;
>> 			count_increased = true;
>> 		} else {
>> 			ret = MF_HUGETLB_FREED;
>> 		}
>> 	} else {
>>
>> ^^^^ *Code will reach here without an extra refcnt increased*
>>
>> 		ret = MF_HUGETLB_RETRY;
>> 		if (!(flags & MF_NO_RETRY))
>> 			goto out;
>> 	}
>>
>> *Code will reach here after retry*
>
> You are right, thanks for pointing that out. Let me think more about
> how to handle this.
>
>> 	rc = hugetlb_update_hwpoison(folio, page);
>> 	if (rc >= MF_HUGETLB_FOLIO_PRE_POISONED) {
>> 		ret = rc;
>> 		goto out;
>> 	}
>>
>> So hugetlb_update_hwpoison() will be called even for a folio under
>> isolation, without folio_try_get(). Or am I missing something?
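
To spell out the interleaving in the scenario above (both sides take
hugetlb_lock in turn; the CPU labels are just for illustration):

/*
 * CPU A: folio_isolate_hugetlb()      CPU B: memory_failure()
 * --------------------------------    ----------------------------------
 * folio_try_get()
 * folio_clear_hugetlb_migratable()
 *                                     __get_huge_page_for_hwpoison()
 *                                       hugetlb_migratable already
 *                                       cleared, folio not freed
 *                                       -> MF_HUGETLB_RETRY, goto out
 *                                     retried with MF_NO_RETRY
 *                                       -> no "goto out", falls through
 *                                          to hugetlb_update_hwpoison()
 *                                          with no extra refcount taken
 */
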
>
> Just a random question: if MF never increments a hugepage's refcount,

MF will hold the hugetlb folio's refcount unless it's freed or isolated.

> what does the folio_put() in me_huge_page() (when mapping == NULL) do?
> Is it dropping a refcount taken by something other than MF?

For an isolated hugetlb folio, MF_HUGETLB_RETRY will be returned and the code won't reach there.
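
That folio_put() pairs with the refcount MF itself took earlier. Roughly
paraphrasing the !mapping branch of me_huge_page() (a sketch from memory,
not a verbatim quote of any particular tree):

	} else {
		folio_unlock(folio);
		/*
		 * A migration entry prevents later access to the error
		 * hugepage, so drop the refcount MF holds and dissolve
		 * the folio into buddy to save the healthy subpages.
		 */
		folio_put(folio);
		...
	}
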
Thanks.