Re: [PATCH v2] mm/page_alloc: use batch page clearing in kernel_init_pages()

From: Vlastimil Babka (SUSE)

Date: Tue Apr 21 2026 - 05:29:30 EST


On 4/21/26 06:24, Hrushikesh Salunke wrote:
> When init_on_alloc is enabled, kernel_init_pages() clears every page
> one at a time via clear_highpage_kasan_tagged(), which incurs per-page
> kmap_local_page()/kunmap_local() overhead and prevents the architecture
> clearing primitive from operating on contiguous ranges.
>
> Introduce clear_highpages_kasan_tagged() in highmem.h, a batch
> clearing helper that calls clear_pages() for the full contiguous range
> on !HIGHMEM systems, bypassing the per-page kmap overhead and allowing
> a single invocation of the arch clearing primitive across the entire
> allocation. The HIGHMEM path falls back to per-page clearing since
> those pages require kmap.
>
> Use it in kernel_init_pages() to replace the per-page loop.
>
> Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:
>
> Before: 0.445s
> After: 0.166s (-62.7%, 2.68x faster)
>
> Kernel time (sys) reduction per workload with init_on_alloc=1:
>
> Workload          Before     After      Change
> Graph500 64C128T  30m 41.8s  15m 14.8s  -50.3%
> Graph500 16C32T   15m 56.7s   9m 43.7s  -39.0%
> Pagerank 32T       1m 58.5s   1m 12.8s  -38.5%
> Pagerank 128T      2m 36.3s   1m 40.4s  -35.7%
>
> Signed-off-by: Hrushikesh Salunke <hsalunke@xxxxxxx>
> ---
> base commit: f1541b40cd422d7e22273be9b7e9edfc9ea4f0d7
>
> v1: https://lore.kernel.org/all/20260408092441.435133-1-hsalunke@xxxxxxx/
>
> Changes since v1:
> - Dropped cond_resched() and PROCESS_PAGES_NON_PREEMPT_BATCH as
> kernel_init_pages() runs inside the page allocator and can be
> called from atomic context, making cond_resched() unsafe. The
> original code never had a cond_resched() here, and the
> performance gain comes from batching, not rescheduling.
>
> - Moved the !HIGHMEM/HIGHMEM branching into a new
> clear_highpages_kasan_tagged() helper in highmem.h, per David's
> suggestion.
>
> include/linux/highmem.h | 12 ++++++++++++
> mm/page_alloc.c | 5 +----
> 2 files changed, 13 insertions(+), 4 deletions(-)

Acked-by: Vlastimil Babka (SUSE) <vbabka@xxxxxxxxxx>

>
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index af03db851a1d..ad0f42d06ce6 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -345,6 +345,18 @@ static inline void clear_highpage_kasan_tagged(struct page *page)
>  	kunmap_local(kaddr);
>  }
>
> +static inline void clear_highpages_kasan_tagged(struct page *page, int numpages)
> +{
> +	if (!IS_ENABLED(CONFIG_HIGHMEM)) {
> +		clear_pages(kasan_reset_tag(page_address(page)), numpages);
> +	} else {
> +		int i;
> +
> +		for (i = 0; i < numpages; i++)
> +			clear_highpage_kasan_tagged(page + i);
> +	}
> +}
> +
> #ifndef __HAVE_ARCH_TAG_CLEAR_HIGHPAGES
>
> /* Return false to let people know we did not initialize the pages */
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index b1c5430cad4e..1aaf7f839ff4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1220,12 +1220,9 @@ static inline bool should_skip_kasan_poison(struct page *page)
>
>  static void kernel_init_pages(struct page *page, int numpages)
>  {
> -	int i;
> -
>  	/* s390's use of memset() could override KASAN redzones. */
>  	kasan_disable_current();
> -	for (i = 0; i < numpages; i++)
> -		clear_highpage_kasan_tagged(page + i);
> +	clear_highpages_kasan_tagged(page, numpages);
>  	kasan_enable_current();
>  }
>