Re: [PATCH v6 2/6] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()
From: Ryan Roberts
Date: Mon Apr 08 2024 - 08:08:26 EST
[...]
>
> [...]
>
>> +
>> +/**
>> + * swap_pte_batch - detect a PTE batch for a set of contiguous swap entries
>> + * @start_ptep: Page table pointer for the first entry.
>> + * @max_nr: The maximum number of table entries to consider.
>> + * @entry: Swap entry recovered from the first table entry.
>> + *
>> + * Detect a batch of contiguous swap entries: consecutive (non-present) PTEs
>> + * containing swap entries all with consecutive offsets and targeting the same
>> + * swap type.
>> + *
>
> Likely you should document that any swp pte bits are ignored?
Now that I understand what the swp pte bits are, I think the simplest thing is
to make this function always consider them, by comparing full PTEs with
pte_same() as you suggest below. I don't think there is ever a case for
ignoring the swp pte bits. It also means I don't need to do anything special
for uffd-wp (below you suggested not batching when the VMA has uffd-wp
enabled). Any concerns?
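For anyone else following along: the swp pte bits are the extra per-arch
software bits that can be encoded in a non-present PTE alongside the swap
entry itself, i.e. the state behind helpers along these lines:

	pte_swp_soft_dirty(pte);	/* soft-dirty state preserved across swap-out */
	pte_swp_exclusive(pte);		/* page was anon-exclusive */
	pte_swp_uffd_wp(pte);		/* uffd-wp protection state */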
>
>> + * max_nr must be at least one and must be limited by the caller so scanning
>> + * cannot exceed a single page table.
>> + *
>> + * Return: the number of table entries in the batch.
>> + */
>> +static inline int swap_pte_batch(pte_t *start_ptep, int max_nr,
>> +				 swp_entry_t entry)
>> +{
>> +	const pte_t *end_ptep = start_ptep + max_nr;
>> +	unsigned long expected_offset = swp_offset(entry) + 1;
>> +	unsigned int expected_type = swp_type(entry);
>> +	pte_t *ptep = start_ptep + 1;
>> +
>> +	VM_WARN_ON(max_nr < 1);
>> +	VM_WARN_ON(non_swap_entry(entry));
>> +
>> +	while (ptep < end_ptep) {
>> +		pte_t pte = ptep_get(ptep);
>> +
>> +		if (pte_none(pte) || pte_present(pte))
>> +			break;
>> +
>> +		entry = pte_to_swp_entry(pte);
>> +
>> +		if (non_swap_entry(entry) ||
>> +		    swp_type(entry) != expected_type ||
>> +		    swp_offset(entry) != expected_offset)
>> +			break;
>> +
>> +		expected_offset++;
>> +		ptep++;
>> +	}
>> +
>> +	return ptep - start_ptep;
>> +}
>
> Looks very clean :)
>
> I was wondering whether we could similarly construct the expected swp PTE and
> only check pte_same.
>
> expected_pte = __swp_entry_to_pte(__swp_entry(expected_type, expected_offset));
So I'm planning to do this.
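Something like the below is what I have in mind (rough sketch only, not yet
tested; it changes swap_pte_batch() to take the original pte so that the swp
pte bits of the first entry are known, and pte_next_swp_offset() is just a
made-up name for the helper):

static inline pte_t pte_next_swp_offset(pte_t pte)
{
	swp_entry_t entry = pte_to_swp_entry(pte);
	pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
						   swp_offset(entry) + 1));

	/* Carry the swp pte bits forward so pte_same() checks them too. */
	if (pte_swp_soft_dirty(pte))
		new = pte_swp_mksoft_dirty(new);
	if (pte_swp_exclusive(pte))
		new = pte_swp_mkexclusive(new);
	if (pte_swp_uffd_wp(pte))
		new = pte_swp_mkuffd_wp(new);

	return new;
}

static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
{
	pte_t expected_pte = pte_next_swp_offset(pte);
	const pte_t *end_ptep = start_ptep + max_nr;
	pte_t *ptep = start_ptep + 1;

	VM_WARN_ON(max_nr < 1);
	VM_WARN_ON(!is_swap_pte(pte));
	VM_WARN_ON(non_swap_entry(pte_to_swp_entry(pte)));

	while (ptep < end_ptep) {
		pte = ptep_get(ptep);

		/* Any mismatch in type, offset or swp pte bits ends the batch. */
		if (!pte_same(pte, expected_pte))
			break;

		expected_pte = pte_next_swp_offset(expected_pte);
		ptep++;
	}

	return ptep - start_ptep;
}

Callers would then pass the original ptent rather than the swp_entry_t.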
>
> ... or have a variant to increase only the swp offset for an existing pte. But
> non-trivial due to the arch-dependent format.
Not this one - I agree it would be non-trivial given the arch-dependent
formats. I'd rather do the generic version and leave the compiler to simplify
and optimize as best it can.
>
> But then, we'd fail on mismatch of other swp pte bits.
>
>
> On swapin, when reusing this function (likely!), we might want to make sure
> that the PTE bits match as well.
>
> See below regarding uffd-wp.
>
>
>> #endif /* CONFIG_MMU */
>> void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
>> diff --git a/mm/madvise.c b/mm/madvise.c
>> index 1f77a51baaac..070bedb4996e 100644
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -628,6 +628,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>  	struct folio *folio;
>>  	int nr_swap = 0;
>>  	unsigned long next;
>> +	int nr, max_nr;
>>
>>  	next = pmd_addr_end(addr, end);
>>  	if (pmd_trans_huge(*pmd))
>> @@ -640,7 +641,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>  		return 0;
>>  	flush_tlb_batched_pending(mm);
>>  	arch_enter_lazy_mmu_mode();
>> -	for (; addr != end; pte++, addr += PAGE_SIZE) {
>> +	for (; addr != end; pte += nr, addr += PAGE_SIZE * nr) {
>> +		nr = 1;
>>  		ptent = ptep_get(pte);
>>
>>  		if (pte_none(ptent))
>> @@ -655,9 +657,11 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>
>>  			entry = pte_to_swp_entry(ptent);
>>  			if (!non_swap_entry(entry)) {
>> -				nr_swap--;
>> -				free_swap_and_cache(entry);
>> -				pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>> +				max_nr = (end - addr) / PAGE_SIZE;
>> +				nr = swap_pte_batch(pte, max_nr, entry);
>> +				nr_swap -= nr;
>> +				free_swap_and_cache_nr(entry, nr);
>> +				clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
>>  			} else if (is_hwpoison_entry(entry) ||
>>  				   is_poisoned_swp_entry(entry)) {
>>  				pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 7dc6c3d9fa83..ef2968894718 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -1637,12 +1637,13 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>  			folio_remove_rmap_pte(folio, page, vma);
>>  			folio_put(folio);
>>  		} else if (!non_swap_entry(entry)) {
>> -			/* Genuine swap entry, hence a private anon page */
>> +			max_nr = (end - addr) / PAGE_SIZE;
>> +			nr = swap_pte_batch(pte, max_nr, entry);
>> +			/* Genuine swap entries, hence private anon pages */
>>  			if (!should_zap_cows(details))
>>  				continue;
>> -			rss[MM_SWAPENTS]--;
>> -			if (unlikely(!free_swap_and_cache(entry)))
>> -				print_bad_pte(vma, addr, ptent, NULL);
>> +			rss[MM_SWAPENTS] -= nr;
>> +			free_swap_and_cache_nr(entry, nr);
>>  		} else if (is_migration_entry(entry)) {
>>  			folio = pfn_swap_entry_folio(entry);
>>  			if (!should_zap_folio(details, folio))
>> @@ -1665,8 +1666,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>  			pr_alert("unrecognized swap entry 0x%lx\n", entry.val);
>>  			WARN_ON_ONCE(1);
>>  		}
>> -		pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>> -		zap_install_uffd_wp_if_needed(vma, addr, pte, 1, details, ptent);
>> +		clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
>
> For zap_install_uffd_wp_if_needed(), the uffd-wp bit has to match.
>
> zap_install_uffd_wp_if_needed() will use the uffd-wp information in ptent
> to decide whether to place PTE_MARKER_UFFD_WP markers.
>
> With a mixture within one batch, you'd either lose some markers or place
> too many.
>
> A simple workaround would be to disable any such batching if the VMA does have
> uffd-wp enabled.
Rather than this, I'll just consider all the swp pte bits when batching: then
the uffd-wp bit is uniform across any batch, and using the first ptent for all
nr entries stays correct.
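To illustrate the invariant (hypothetical assertion only; i and the loop are
not real code in the patch):

	/* Within a batch, every PTE carries identical swp pte bits, so the
	 * uffd-wp decision made from the first ptent applies to all nr. */
	for (i = 0; i < nr; i++)
		VM_WARN_ON(pte_swp_uffd_wp(ptep_get(pte + i)) !=
			   pte_swp_uffd_wp(ptent));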
>
>> +		zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent);
>>  	} while (pte += nr, addr += PAGE_SIZE * nr, addr != end);
[...]