Re: [PATCH 01/13] mm: Update ptep_get_lockless()'s comment
From: Peter Zijlstra
Date: Mon Oct 31 2022 - 05:46:46 EST
On Sun, Oct 30, 2022 at 06:47:23PM -0700, Linus Torvalds wrote:
> diff --git a/mm/memory.c b/mm/memory.c
> index ba1d08a908a4..c893f5ffc5a8 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1451,9 +1451,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>  				if (pte_young(ptent) &&
>  				    likely(!(vma->vm_flags & VM_SEQ_READ)))
>  					mark_page_accessed(page);
> +			}
> +			page_zap_pte_rmap(page);
>  			munlock_vma_page(page, vma, false);
>  			rss[mm_counter(page)]--;
>  			if (unlikely(page_mapcount(page) < 0))
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 69de6c833d5c..28b51a31ebb0 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1413,47 +1413,26 @@ static void page_remove_anon_compound_rmap(struct page *page)
>  }
>
>  /**
> + * page_zap_pte_rmap - take down a pte mapping from a page
>   * @page: page to remove mapping from
>   *
> + * This is the simplified form of page_remove_rmap(), that only
> + * deals with last-level pages, so 'compound' is always false,
> + * and the caller does 'munlock_vma_page(page, vma, compound)'
> + * separately.
>   *
> + * This allows for a much simpler calling convention and code.
>   *
>   * The caller holds the pte lock.
>   */
> +void page_zap_pte_rmap(struct page *page)
>  {
>  	if (!atomic_add_negative(-1, &page->_mapcount))
>  		return;
>
>  	lock_page_memcg(page);
> +	__dec_lruvec_page_state(page,
> +			PageAnon(page) ? NR_ANON_MAPPED : NR_FILE_MAPPED);
>  	unlock_page_memcg(page);
>  }
So we *could* use atomic_add_return() and include the print_bad_pte()
thing in this function -- however that turns the whole thing into a mess
again :/
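
For the record, a rough sketch of what that variant might look like
(completely untested, and the signature is my invention): the helper has
to grow vma, addr and ptent arguments that only exist to feed the error
path, and print_bad_pte() is local to mm/memory.c today, so it would
have to move or be exported on top of that:

/*
 * Untested sketch, NOT what the patch does: use atomic_add_return()
 * instead of atomic_add_negative(), so the underflow check can live
 * in the helper itself.
 *
 * _mapcount is biased by -1: a result of -1 means we just removed
 * the last mapping, anything below -1 means the count underflowed.
 */
void page_zap_pte_rmap(struct vm_area_struct *vma, struct page *page,
		       unsigned long addr, pte_t ptent)
{
	int mapcount = atomic_add_return(-1, &page->_mapcount);

	/* Page still mapped by someone else? */
	if (mapcount >= 0)
		return;

	/* Went below the 'unmapped' value of -1: bad PTE. */
	if (unlikely(mapcount < -1))
		print_bad_pte(vma, addr, ptent, page);

	lock_page_memcg(page);
	__dec_lruvec_page_state(page,
			PageAnon(page) ? NR_ANON_MAPPED : NR_FILE_MAPPED);
	unlock_page_memcg(page);
}

Every caller then passes three extra arguments it otherwise wouldn't
care about, which is exactly the calling-convention clutter the patch
just got rid of.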