[PATCH] mm, thp: use head_mapcount when we know we have a head page

From: Vlastimil Babka
Date: Thu Aug 06 2020 - 07:33:58 EST


Patch "mm, dump_page: do not crash with bad compound_mapcount()" has introduced
head_mapcount() to split out the part of compound_mapcount() where we already
know/assume we have a head page. We can use the new function in more places
where we already have a head page, to avoid the overhead of compound_head()
and (with DEBUG_VM) a debug check. This patch does that. There are few more
applicable places, but behind DEBUG_VM so performance is not important, and the
extra debug check in compound_mapcount() could be useful instead.

The bloat-o-meter difference without DEBUG_VM is as follows:

add/remove: 0/0 grow/shrink: 1/4 up/down: 32/-56 (-24)
Function                  old     new   delta
__split_huge_pmd         2867    2899     +32
shrink_page_list         3860    3847     -13
reuse_swap_page           762     748     -14
page_trans_huge_mapcount  153     139     -14
total_mapcount            187     172     -15
Total: Before=8687306, After=8687282, chg -0.00%

This just shows that the compiler wasn't able to prove we have a head page by
itself. In __split_huge_pmd() the eliminated check probably led to different
optimization decisions, and thus the code size increased.
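
For reference, the comparison above comes from the kernel's
scripts/bloat-o-meter; an invocation along the lines of

    ./scripts/bloat-o-meter vmlinux.old vmlinux.new

(with vmlinux built before and after this patch; the file names here are just
illustrative) produces this kind of output.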

Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
---
mm/huge_memory.c | 6 +++---
mm/swapfile.c    | 2 +-
mm/vmscan.c      | 2 +-
3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 90733cefa528..5927874b7894 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2125,7 +2125,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
* Set PG_double_map before dropping compound_mapcount to avoid
* false-negative page_mapped().
*/
- if (compound_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
+ if (head_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
for (i = 0; i < HPAGE_PMD_NR; i++)
atomic_inc(&page[i]._mapcount);
}
@@ -2467,7 +2467,7 @@ int total_mapcount(struct page *page)
if (likely(!PageCompound(page)))
return atomic_read(&page->_mapcount) + 1;

- compound = compound_mapcount(page);
+ compound = head_mapcount(page);
if (PageHuge(page))
return compound;
ret = compound;
@@ -2531,7 +2531,7 @@ int page_trans_huge_mapcount(struct page *page, int *total_mapcount)
ret -= 1;
_total_mapcount -= HPAGE_PMD_NR;
}
- mapcount = compound_mapcount(page);
+ mapcount = head_mapcount(page);
ret += mapcount;
_total_mapcount += mapcount;
if (total_mapcount)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 9ee4211835c6..c5e722de38b8 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1673,7 +1673,7 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
map_swapcount -= 1;
_total_mapcount -= HPAGE_PMD_NR;
}
- mapcount = compound_mapcount(page);
+ mapcount = head_mapcount(page);
map_swapcount += mapcount;
_total_mapcount += mapcount;
if (total_mapcount)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a086c104a9a6..72218cdfd902 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1248,7 +1248,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
* away. Chances are some or all of the
* tail pages can be freed without IO.
*/
- if (!compound_mapcount(page) &&
+ if (!head_mapcount(page) &&
split_huge_page_to_list(page,
page_list))
goto activate_locked;
--
2.28.0