[PATCH v2 2/2] alloc_tag: outline and export free_reserved_page()

From: Suren Baghdasaryan
Date: Wed Jul 17 2024 - 14:13:04 EST


Outline and export free_reserved_page() because modules use it, and it
in turn uses page_ext_{get|put}, which should not be exported. The same
result could be obtained by outlining {get|put}_page_tag_ref() instead,
but that would have a higher performance impact, as those functions are
used in more performance-critical paths.

Fixes: dcfe378c81f7 ("lib: introduce support for page allocation tagging")
Reported-by: kernel test robot <lkp@xxxxxxxxx>
Closes: https://lore.kernel.org/oe-kbuild-all/202407080044.DWMC9N9I-lkp@xxxxxxxxx/
Suggested-by: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Suggested-by: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
---
Changes since v1 [1]
- Outlined and exported free_reserved_page() in place of {get|put}_page_tag_ref,
per Vlastimil Babka

[1] https://lore.kernel.org/all/20240717011631.2150066-2-surenb@xxxxxxxxxx/
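
For illustration only (a hypothetical sketch, not the module from the
robot report): a module along the following lines is the kind of user
that needs the export. With CONFIG_MEM_ALLOC_PROFILING, the old inline
free_reserved_page() expanded get_page_tag_ref()/put_page_tag_ref()
into the module and therefore needed the unexported page_ext_{get|put};
the outlined version only needs the single exported symbol.

#include <linux/module.h>
#include <linux/mm.h>
#include <linux/gfp.h>

static struct page *demo_page;

static int __init demo_init(void)
{
	demo_page = alloc_page(GFP_KERNEL);
	if (!demo_page)
		return -ENOMEM;

	/* Take the page out of normal memory management. */
	mark_page_reserved(demo_page);
	return 0;
}

static void __exit demo_exit(void)
{
	/*
	 * Give the reserved page back to the buddy allocator. With this
	 * patch the module links against the exported free_reserved_page()
	 * rather than pulling in page_ext_{get|put}.
	 */
	free_reserved_page(demo_page);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_DESCRIPTION("Hypothetical free_reserved_page() user (sketch)");
MODULE_LICENSE("GPL");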

 include/linux/mm.h | 16 +---------------
 mm/page_alloc.c    | 17 +++++++++++++++++
 2 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index eb7c96d24ac0..b58bad248eef 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3177,21 +3177,7 @@ extern void reserve_bootmem_region(phys_addr_t start,
phys_addr_t end, int nid);

/* Free the reserved page into the buddy system, so it gets managed. */
-static inline void free_reserved_page(struct page *page)
-{
-	if (mem_alloc_profiling_enabled()) {
-		union codetag_ref *ref = get_page_tag_ref(page);
-
-		if (ref) {
-			set_codetag_empty(ref);
-			put_page_tag_ref(ref);
-		}
-	}
-	ClearPageReserved(page);
-	init_page_count(page);
-	__free_page(page);
-	adjust_managed_page_count(page, 1);
-}
+void free_reserved_page(struct page *page);
#define free_highmem_page(page) free_reserved_page(page)

static inline void mark_page_reserved(struct page *page)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9ecf99190ea2..7d2fa9f5e750 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5805,6 +5805,23 @@ unsigned long free_reserved_area(void *start, void *end, int poison, const char
	return pages;
}

+void free_reserved_page(struct page *page)
+{
+	if (mem_alloc_profiling_enabled()) {
+		union codetag_ref *ref = get_page_tag_ref(page);
+
+		if (ref) {
+			set_codetag_empty(ref);
+			put_page_tag_ref(ref);
+		}
+	}
+	ClearPageReserved(page);
+	init_page_count(page);
+	__free_page(page);
+	adjust_managed_page_count(page, 1);
+}
+EXPORT_SYMBOL(free_reserved_page);
+
static int page_alloc_cpu_dead(unsigned int cpu)
{
	struct zone *zone;
--
2.45.2.993.g49e7a77208-goog