Re: [PATCH v1 10/15] mm/page-flags: reuse PG_slab as PG_anon_exclusive for PageAnon() pages

From: David Hildenbrand
Date: Sat Mar 12 2022 - 03:27:01 EST


On 11.03.22 22:02, Matthew Wilcox wrote:
> On Fri, Mar 11, 2022 at 07:46:39PM +0100, David Hildenbrand wrote:
>> I'm currently testing with the following. My tests so far with a swapfile on
>> all different kinds of weird filesystems (excluding networking fs, though)
>> revealed no surprises so far:
>
> I like this a lot better than reusing PG_swap. Thanks!
>
> I'm somewhat reluctant to introduce a new flag that can be set on tail
> pages. Do we lose much if it's always set only on the head page?

After spending a month getting THP to work without PF_ANY, I can say
with confidence that the whole thing won't fly unless we track the flag
at the minimum mapping granularity. For a PTE-mapped THP, that's the
subpage level.

The next patch in the series documents some details. Intuitively, if we
could replace the page flag with a PTE/PMD bit, we'd get roughly the
same result.
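
To illustrate the split case described in the comment below (userspace
toy model only, not kernel code; names and sizes are made up): a
PMD-mapped THP carries the flag on the head page, and splitting to a
PTE mapping propagates it to every subpage so each PTE-level mapping
tracks exclusivity on its own.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NR_SUBPAGES 512            /* e.g., 2 MiB THP with 4 KiB base pages */

struct toy_page {
	bool anon_exclusive;       /* stand-in for PG_anon_exclusive */
};

/* PMD-mapped THP: only the head page carries the flag. */
static void toy_set_exclusive_pmd(struct toy_page folio[NR_SUBPAGES])
{
	folio[0].anon_exclusive = true;
}

/*
 * Split the PMD into a page table full of PTEs: copy the head page's
 * flag to all tail pages, so the bit is tracked at the new (subpage)
 * mapping granularity.
 */
static void toy_split_pmd_to_ptes(struct toy_page folio[NR_SUBPAGES])
{
	for (size_t i = 1; i < NR_SUBPAGES; i++)
		folio[i].anon_exclusive = folio[0].anon_exclusive;
}
```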

>
>> +++ b/include/linux/page-flags.h
>> @@ -142,6 +142,60 @@ enum pageflags {
>>
>> PG_readahead = PG_reclaim,
>>
>> + /*
>> + * Depending on the way an anonymous folio can be mapped into a page
>> + * table (e.g., single PMD/PUD/CONT of the head page vs. PTE-mapped
>> + * THP), PG_anon_exclusive may be set only for the head page or for
>> + * subpages of an anonymous folio.
>> + *
>> + * PG_anon_exclusive is *usually* only expressive in combination with a
>> + * page table entry. Depending on the page table entry type it might
>> + * store the following information:
>> + *
>> + * Is what's mapped via this page table entry exclusive to the
>> + * single process and can be mapped writable without further
>> + * checks? If not, it might be shared and we might have to COW.
>> + *
>> + * For now, we only expect PTE-mapped THPs to make use of
>> + * PG_anon_exclusive in subpages. For other anonymous compound
>> + * folios (i.e., hugetlb), only the head page is logically mapped and
>> + * holds this information.
>> + *
>> + * For example, an exclusive, PMD-mapped THP only has PG_anon_exclusive
>> + * set on the head page. When replacing the PMD by a page table full
>> + * of PTEs, PG_anon_exclusive, if set on the head page, will be set on
>> + * all tail pages accordingly. Note that converting from a PTE-mapping
>> + * to a PMD mapping using the same compound page is currently not
>> + * possible and consequently doesn't require care.
>> + *
>> + * If GUP wants to take a reliable pin (FOLL_PIN) on an anonymous page,
>> + * it should only pin if the relevant PG_anon_exclusive bit is set. In that case,
>> + * the pin will be fully reliable and stay consistent with the pages
>> + * mapped into the page table, as the bit cannot get cleared (e.g., by
>> + * fork(), KSM) while the page is pinned. For anonymous pages that
>> + * are mapped R/W, PG_anon_exclusive can be assumed to always be set
>> + * because such pages cannot possibly be shared.
>> + *
>> + * The page table lock protecting the page table entry is the primary
>> + * synchronization mechanism for PG_anon_exclusive; GUP-fast that does
>> + * not take the PT lock needs special care when trying to clear the
>> + * flag.
>> + *
>> + * Page table entry types and PG_anon_exclusive:
>> + * * Present: PG_anon_exclusive applies.
>> + * * Swap: the information is lost. PG_anon_exclusive was cleared.
>> + * * Migration: the entry holds this information instead.
>> + * PG_anon_exclusive was cleared.
>> + * * Device private: PG_anon_exclusive applies.
>> + * * Device exclusive: PG_anon_exclusive applies.
>> + * * HW Poison: PG_anon_exclusive is stale and not changed.
>> + *
>> + * If the page may be pinned (FOLL_PIN), clearing PG_anon_exclusive is
>> + * not allowed and the flag will stick around until the page is freed
>> + * and folio->mapping is cleared.
>> + */
>
> ... I also don't think this is the right place for this comment. Not
> sure where it should go.

I went for "rather have some documentation in a sub-optimal place than
no documentation at all". I'm thinking about writing proper
documentation once everything is in place, and then moving some of
these details into that document.

>
>> +static __always_inline void SetPageAnonExclusive(struct page *page)
>> +{
>> + VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page);
>
> hm. seems to me like we should have a PageAnonNotKsm which just
> does
> return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
> PAGE_MAPPING_ANON;
> because that's "a bit" inefficient. OK, that's just a VM_BUG_ON,
> but we have other users in real code:
>
> mm/migrate.c: if (PageAnon(page) && !PageKsm(page))
> mm/page_idle.c: need_lock = !PageAnon(page) || PageKsm(page);
> mm/rmap.c: if (!is_locked && (!PageAnon(page) || PageKsm(page))) {
>

I'm wondering whether the compiler isn't already able to optimize that.
That said, I can look into adding such a helper outside of this series.
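
For reference, a userspace sketch of the single-comparison check you
suggest (the flag values mirror the kernel's PAGE_MAPPING_* bits; the
struct and helper names here are made up for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Low bits of page->mapping encode the mapping type, as in the kernel. */
#define PAGE_MAPPING_ANON    0x1UL
#define PAGE_MAPPING_MOVABLE 0x2UL
#define PAGE_MAPPING_KSM     (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)
#define PAGE_MAPPING_FLAGS   (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)

struct toy_page {
	uintptr_t mapping;
};

static bool toy_page_anon(const struct toy_page *page)
{
	return page->mapping & PAGE_MAPPING_ANON;
}

static bool toy_page_ksm(const struct toy_page *page)
{
	return (page->mapping & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_KSM;
}

/* Single test: anon and not KSM in one comparison instead of two. */
static bool toy_page_anon_not_ksm(const struct toy_page *page)
{
	return (page->mapping & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_ANON;
}
```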

Thanks!

--
Thanks,

David / dhildenb