Re: [PATCH] mm/page_alloc: use batch page clearing in kernel_init_pages()
From: Raghavendra K T
Date: Wed Apr 08 2026 - 07:19:28 EST
On 4/8/2026 4:14 PM, Salunke, Hrushikesh wrote:
On 08-04-2026 15:17, Vlastimil Babka (SUSE) wrote:
On 4/8/26 11:24, Hrushikesh Salunke wrote:
When init_on_alloc is enabled, kernel_init_pages() clears every page
one at a time, calling clear_page() per page. This is unnecessarily
slow for large contiguous allocations (mTHPs, HugeTLB) that dominate
real workloads.
On 64-bit (!HIGHMEM) systems, switch to clearing pages in batch via
clear_pages(), bypassing the per-page kmap_local_page()/kunmap_local()
overhead and allowing the arch clearing primitive to operate on the full
contiguous range in a single invocation. The batch size is the full
allocation when the preempt model is preemptible (preemption points are
implicit), or PROCESS_PAGES_NON_PREEMPT_BATCH otherwise, with
cond_resched() between batches to limit scheduling latency under
cooperative preemption.
The HIGHMEM path is kept as-is since those pages require kmap.
Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:
Before: 0.445s
After: 0.166s (-62.7%, 2.68x faster)
Kernel time (sys) reduction per workload with init_on_alloc=1:
Workload            Before     After      Change
Graph500 64C128T    30m 41.8s  15m 14.8s  -50.3%
Graph500 16C32T     15m 56.7s   9m 43.7s  -39.0%
Pagerank 32T         1m 58.5s   1m 12.8s  -38.5%
Pagerank 128T        2m 36.3s   1m 40.4s  -35.7%
Signed-off-by: Hrushikesh Salunke <hsalunke@xxxxxxx>
---
base commit: 1a2fbbe3653f0ebb24af9b306a8a968287344a35

Any way to reuse the code added by [1], e.g. clear_user_highpages()?

[1] https://lore.kernel.org/linux-mm/20250917152418.4077386-1-ankur.a.arora@xxxxxxxxxx/
Thanks for the review. Sure, I will check if code reuse is possible.
Meanwhile I found another issue with the current patch.
kernel_init_pages() runs inside the allocator (post_alloc_hook() and
free_pages_prepare()), so it inherits whatever context the caller is in.
Testing with CONFIG_DEBUG_ATOMIC_SLEEP=y and CONFIG_PROVE_LOCKING=y, I
hit this during exit_group() -> exit_mmap() -> __zap_vma_range, where a
page allocation happens while the PTE lock and RCU read lock are held,
making the cond_resched() in the clearing loop illegal:
[ 1997.353228] BUG: sleeping function called from invalid context at mm/page_alloc.c:1235
[ 1997.353433] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 19725, name: bash
[ 1997.353572] preempt_count: 1, expected: 0
[ 1997.353706] RCU nest depth: 1, expected: 0
[ 1997.353837] 3 locks held by bash/19725:
[ 1997.353839] #0: ff38cd415971e540 (&mm->mmap_lock){++++}-{4:4}, at: exit_mmap+0x6e/0x430
[ 1997.353850] #1: ffffffffb03d6f60 (rcu_read_lock){....}-{1:3}, at: __pte_offset_map+0x2c/0x220
[ 1997.353855] #2: ff38cd410deb4618 (ptlock_ptr(ptdesc)#2){+.+.}-{3:3}, at: pte_offset_map_lock+0x92/0x170
[ 1997.353868] Call Trace:
[ 1997.353870] <TASK>
[ 1997.353873] dump_stack_lvl+0x91/0xb0
[ 1997.353877] __might_resched+0x15f/0x290
[ 1997.353882] kernel_init_pages+0x4b/0xa0
[ 1997.353886] get_page_from_freelist+0x406/0x1e60
[ 1997.353895] __alloc_frozen_pages_noprof+0x1d8/0x1730
[ 1997.353912] alloc_pages_mpol+0xa4/0x190
[ 1997.353917] alloc_pages_noprof+0x59/0xd0
[ 1997.353919] get_free_pages_noprof+0x11/0x40
[ 1997.353921] __tlb_remove_folio_pages_size.isra.0+0x7f/0xe0
[ 1997.353923] __zap_vma_range+0x1bbd/0x1f40
[ 1997.353931] unmap_vmas+0xd9/0x1d0
[ 1997.353934] exit_mmap+0x10a/0x430
[ 1997.353943] __mmput+0x3d/0x130
[ 1997.353947] do_exit+0x2a7/0xae0
[ 1997.353951] do_group_exit+0x36/0xa0
[ 1997.353953] __x64_sys_exit_group+0x18/0x20
[ 1997.353959] do_syscall_64+0xe1/0x710
[ 1997.353990] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 1997.354003] </TASK>
This also means clear_contig_highpages() can't be directly reused here
since it has an unconditional might_sleep() + cond_resched(). I'll look
into this. Any suggestions on the right way to handle cond_resched()
in a context that may or may not be atomic?
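One possible direction I am considering (just a sketch, untested, and the
extra parameter is hypothetical): on the allocation side post_alloc_hook()
already has the gfp mask, so the sleepability of the context could be
threaded down and the cond_resched() gated on gfpflags_allow_blocking().
The free path (free_pages_prepare() has no gfp context) would still need
a separate answer:

static void kernel_init_pages(struct page *page, int numpages,
			      bool may_block)
{
	int i, count;

	/* s390's use of memset() could override KASAN redzones. */
	kasan_disable_current();
	if (!IS_ENABLED(CONFIG_HIGHMEM)) {
		void *addr = kasan_reset_tag(page_address(page));
		unsigned int unit = preempt_model_preemptible() ?
				    numpages : PROCESS_PAGES_NON_PREEMPT_BATCH;

		for (i = 0; i < numpages; i += count) {
			/*
			 * Only take a voluntary preemption point when
			 * the allocation context allows blocking.
			 */
			if (may_block)
				cond_resched();
			count = min_t(int, unit, numpages - i);
			clear_pages(addr + (i << PAGE_SHIFT), count);
		}
	} else {
		for (i = 0; i < numpages; i++)
			clear_highpage_kasan_tagged(page + i);
	}
	kasan_enable_current();
}

/* Alloc side, in post_alloc_hook(), where gfp_flags is in scope: */
kernel_init_pages(page, 1 << order, gfpflags_allow_blocking(gfp_flags));

If the allocation in the trace above is GFP_NOWAIT,
gfpflags_allow_blocking() would return false there and the resched
would simply be skipped instead of tripping DEBUG_ATOMIC_SLEEP.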
Thanks,
Hrushikesh
mm/page_alloc.c | 19 +++++++++++++++++--
1 file changed, 17 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b1c5430cad4e..178cbebadd50 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1224,8 +1224,23 @@ static void kernel_init_pages(struct page *page, int numpages)
 	/* s390's use of memset() could override KASAN redzones. */
 	kasan_disable_current();
-	for (i = 0; i < numpages; i++)
-		clear_highpage_kasan_tagged(page + i);
+
+	if (!IS_ENABLED(CONFIG_HIGHMEM)) {
+		void *addr = kasan_reset_tag(page_address(page));
+		unsigned int unit = preempt_model_preemptible() ?
+				    numpages : PROCESS_PAGES_NON_PREEMPT_BATCH;
+		int count;
+
+		for (i = 0; i < numpages; i += count) {
+			cond_resched();
Just thinking: for a preemptible kernel (or preempt_auto),
preempt_count() already knows about the preemption points and decides
where it can preempt; and for non-preemptible and voluntary kernels it
is safe to preempt at PROCESS_PAGES_NON_PREEMPT_BATCH granularity.
Do we need the cond_resched() here (see the sketch below, after the
diff)? Let me know if I am missing something.
+			count = min_t(int, unit, numpages - i);
+			clear_pages(addr + (i << PAGE_SHIFT), count);
+		}
+	} else {
+		for (i = 0; i < numpages; i++)
+			clear_highpage_kasan_tagged(page + i);
+	}
+
 	kasan_enable_current();
 }
Regards
- Raghu