Re: [PATCH v1 04/18] mm: track mapcount of large folios in single value

From: David Hildenbrand
Date: Thu Apr 18 2024 - 11:10:06 EST


On 18.04.24 16:50, Lance Yang wrote:
> Hey David,
>
> FWIW, just a nit below.

Hi!

Thanks, but that was done on purpose.

This way, we'll have a memory barrier (due to at least one atomic_inc_and_test()) between incrementing the folio refcount (happening before the rmap change) and incrementing the mapcount.

Is it required? Not 100% sure; refcount vs. mapcount checks are always a bit racy. But doing it this way lets me sleep better at night ;)

[with no subpage mapcounts, we'd do the atomic_inc_and_test on the large mapcount and have the memory barrier there again; but that's stuff for the future]

Thanks!


> diff --git a/mm/rmap.c b/mm/rmap.c
> index 2608c40dffad..08bb6834cf72 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1143,7 +1143,6 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
>  		int *nr_pmdmapped)
>  {
>  	atomic_t *mapped = &folio->_nr_pages_mapped;
> -	const int orig_nr_pages = nr_pages;
>  	int first, nr = 0;
>  
>  	__folio_rmap_sanity_checks(folio, page, nr_pages, level);
> @@ -1155,6 +1154,7 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
>  			break;
>  		}
>  
> +		atomic_add(nr_pages, &folio->_large_mapcount);
>  		do {
>  			first = atomic_inc_and_test(&page->_mapcount);
>  			if (first) {
> @@ -1163,7 +1163,6 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
>  				nr++;
>  			}
>  		} while (page++, --nr_pages > 0);
> -		atomic_add(orig_nr_pages, &folio->_large_mapcount);
>  		break;
>  	case RMAP_LEVEL_PMD:
>  		first = atomic_inc_and_test(&folio->_entire_mapcount);

> Thanks,
> Lance


--
Cheers,

David / dhildenb