Re: [PATCH v2 2/4] mm/huge_memory: replace can_split_folio() with direct refcount calculation
From: David Hildenbrand (Red Hat)
Date: Tue Nov 25 2025 - 03:54:08 EST
Like:
	if (folio_test_anon(folio)) {
		/* One reference per page from the swapcache. */
		ref_count += folio_test_swapcache(folio) << order;
	} else {
		/* One reference per page from shmem in the swapcache. */
		ref_count += folio_test_swapcache(folio) << order;
		/* One reference per page from the pagecache. */
		ref_count += !!folio->mapping << order;
		/* One reference from PG_private. */
		ref_count += folio_test_private(folio);
	}
or simplified into
	if (!folio_test_anon(folio)) {
		/* One reference per page from the pagecache. */
		ref_count += !!folio->mapping << order;
		/* One reference from PG_private. */
		ref_count += folio_test_private(folio);
	}
	/* One reference per page from the swapcache (anon or shmem). */
	ref_count += folio_test_swapcache(folio) << order;
?
That is incorrect, I think: folio_test_swapcache() can give false positives, because PG_swapcache aliases PG_owner_priv_1.
Got it. So it should be:
	if (folio_test_anon(folio)) {
		/* One reference per page from the swapcache. */
		ref_count += folio_test_swapcache(folio) << order;
	} else {
		/* One reference per page from shmem in the swapcache. */
		ref_count += (folio_test_swapbacked(folio) &&
			      folio_test_swapcache(folio)) << order;
		/* One reference per page from the pagecache. */
		ref_count += !!folio->mapping << order;
		/* One reference from PG_private. */
		ref_count += folio_test_private(folio);
	}
Interestingly, I think we would then also take proper care of anon folios in the
swapcache that are not anon yet. See __read_swap_cache_async().
I wonder if we can clean that up a bit, to highlight that PG_private etc
do not apply.
	if (folio_test_anon(folio)) {
		/* One reference per page from the swapcache. */
		ref_count += folio_test_swapcache(folio) << order;
	} else if (folio_test_swapbacked(folio) && folio_test_swapcache(folio)) {
		/* to-be-anon or shmem folio in the swapcache (!folio->mapping) */
		ref_count += 1ul << order;
		VM_WARN_ON_ONCE(folio->mapping);
	} else {
		/* One reference per page from the pagecache. */
		ref_count += !!folio->mapping << order;
		/* One reference from PG_private. */
		ref_count += folio_test_private(folio);
	}
Or maybe simply:
	if (folio_test_swapbacked(folio) && folio_test_swapcache(folio)) {
		/*
		 * (to-be) anon or shmem (!folio->mapping) folio in the swapcache:
		 * One reference per page from the swapcache.
		 */
		ref_count += 1 << order;
		VM_WARN_ON_ONCE(!folio_test_anon(folio) && folio->mapping);
	} else if (!folio_test_anon(folio)) {
		/* One reference per page from the pagecache. */
		ref_count += !!folio->mapping << order;
		/* One reference from PG_private. */
		ref_count += folio_test_private(folio);
	}
I wonder if we should have folio_test_shmem_in_swapcache() instead.
Interestingly, thinking about it, I think it would also match to-be anon folios
and anon folios.
folio_in_swapcache() maybe ?
BTW, this page flag reuse is really confusing.
Yes ...
I see PG_checked is PG_owner_priv_1 too, and __folio_migrate_mapping() uses
folio_test_swapcache() to decide the number of i_pages entries. Wouldn't that
cause an issue?
Maybe at that point all false positives were ruled out?
It is horrible TBH.
--
Cheers
David