Re: [RFC PATCH] mm: huge_memory: add folio_mark_accessed() when zapping file THP
From: Barry Song
Date: Thu Apr 10 2025 - 17:57:00 EST
On Fri, Apr 11, 2025 at 3:13 AM Zi Yan <ziy@xxxxxxxxxx> wrote:
>
> On 10 Apr 2025, at 6:29, Barry Song wrote:
>
> > On Thu, Apr 10, 2025 at 9:05 PM Baolin Wang
> > <baolin.wang@xxxxxxxxxxxxxxxxx> wrote:
> >>
> >>
> >>
> >> On 2025/4/10 16:14, Barry Song wrote:
> >>> On Wed, Apr 9, 2025 at 1:16 AM Baolin Wang
> >>> <baolin.wang@xxxxxxxxxxxxxxxxx> wrote:
> >>>>
> >>>> When investigating performance issues during file folio unmap, I noticed some
> >>>> behavioral differences in handling non-PMD-sized folios and PMD-sized folios.
> >>>> For non-PMD-sized file folios, the PTE zap path calls folio_mark_accessed()
> >>>> to mark the folio as having seen activity, but this is not done for
> >>>> PMD-sized folios.
> >>>>
> >>>> This might not cause obvious issues, but it could lead to more frequent
> >>>> refaults of PMD-sized file folios under memory pressure. Therefore, I am
> >>>> unsure whether folio_mark_accessed() should also be added for PMD-sized
> >>>> file folios.
> >>>>
> >>>> Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
> >>>> ---
> >>>> mm/huge_memory.c | 4 ++++
> >>>> 1 file changed, 4 insertions(+)
> >>>>
> >>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >>>> index 6ac6d468af0d..b3ade7ac5bbf 100644
> >>>> --- a/mm/huge_memory.c
> >>>> +++ b/mm/huge_memory.c
> >>>> @@ -2262,6 +2262,10 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >>>> zap_deposited_table(tlb->mm, pmd);
> >>>> add_mm_counter(tlb->mm, mm_counter_file(folio),
> >>>> -HPAGE_PMD_NR);
> >>>> +
> >>>> + if (flush_needed && pmd_young(orig_pmd) &&
> >>>> + likely(vma_has_recency(vma)))
> >>>> + folio_mark_accessed(folio);
> >>>
> >>> Acked-by: Barry Song <baohua@xxxxxxxxxx>
> >>
> >> Thanks.
> >>
> >>> I also came across an interesting observation: on a memory-limited system,
> >>> demoting unmapped file folios in the LRU—specifically when their mapcount
> >>> drops from 1 to 0—can actually improve performance.
> >>
> >> Are these file folios used only once? Could folio_set_dropbehind() be used
> >> to optimize this, which would avoid the LRU movement done by
> >> folio_mark_accessed()?
> >
> > For instance, when a process such as a game exits, we can expect that its
> > pages won't be used again in the near future. As a result, demoting its
> > file pages as they become unmapped can improve performance.
>
> Is it possible to mark the dying VMAs with either VM_SEQ_READ or VM_RAND_READ
> so that folio_mark_accessed() will be skipped? Or a new vm_flag?
> Will it work?
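Marking the VMAs should work as far as the skip goes: the patch gates on
vma_has_recency(), which (roughly, quoting include/linux/mm_inline.h from
memory, so worth double-checking) already returns false for those two flags:

	static inline bool vma_has_recency(struct vm_area_struct *vma)
	{
		if (vma->vm_flags & (VM_SEQ_READ | VM_RAND_READ))
			return false;

		if (vma->vm_file && (vma->vm_file->f_mode & FMODE_NOREUSE))
			return false;

		return true;
	}
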
That said, I actually took a more aggressive approach and observed good
performance improvements on phones. After zap_pte_range() has removed the
rmap, the following logic was added (roughly):

	if (!folio_test_anon(folio) && !folio_mapped(folio))
		deactivate_file_folio(folio);

This helps file folios from exiting processes get reclaimed more quickly by
MGLRU's min-generation scan, instead of lingering in what is probably the
max generation.
I'm not entirely sure if this is universally applicable or worth submitting as
a patch.
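If it turned out to be worth pursuing, one way to keep it confined to process
exit might be to key off tlb->fullmm in the pte zap path, roughly like this
(a sketch only, not the exact hunk I tested; tlb->fullmm is set when the
whole address space is being torn down, e.g. via exit_mmap()):

	folio_remove_rmap_ptes(folio, page, nr, vma);
	/*
	 * Last mapping of a page-cache folio going away on full-mm teardown:
	 * it is unlikely to be needed again soon, so nudge it toward reclaim.
	 */
	if (tlb->fullmm && !folio_test_anon(folio) && !folio_mapped(folio))
		deactivate_file_folio(folio);
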
>
> >
> > Of course, for file folios mapped by multiple processes, such as common
> > .so files, it's a different story. Typically, their mapcounts stay high.
>
> Text VMAs should not be marked.
>
> >
> >>
> >>> If others have observed the same behavior, we might not need to mark them
> >>> as accessed in that scenario.
> >>>
> >>>> }
> >>>>
> >>>> spin_unlock(ptl);
> >>>> --
> >>>> 2.43.5
> >>>>
> >>>
> >
Thanks
Barry