Re: [PATCH v3 1/4] x86/clear_page: extend clear_page*() for multi-page clearing

From: Ingo Molnar
Date: Mon Apr 14 2025 - 02:32:57 EST



* Ankur Arora <ankur.a.arora@xxxxxxxxxx> wrote:

> clear_page*() variants now take a page-aligned length parameter and
> clears the whole region.

Please read your changelogs and fix typos. ;-)

> +void clear_pages_orig(void *page, unsigned int length);
> +void clear_pages_rep(void *page, unsigned int length);
> +void clear_pages_erms(void *page, unsigned int length);

What unit is 'length' in? If it's bytes, why is this interface
artificially limiting itself to ~4GB? On x86-64 there's very little (if
any) performance difference between 32-bit and 64-bit length
iterations.

Even if we end up only exposing a 32-bit length API to the generic MM
layer, there's no reason to limit the x86-64 assembly code in such a
fashion.

> static inline void clear_page(void *page)
> {
> + unsigned int length = PAGE_SIZE;
> /*
> - * Clean up KMSAN metadata for the page being cleared. The assembly call
> + * Clean up KMSAN metadata for the pages being cleared. The assembly call
> * below clobbers @page, so we perform unpoisoning before it.

> */
> - kmsan_unpoison_memory(page, PAGE_SIZE);
> - alternative_call_2(clear_page_orig,
> - clear_page_rep, X86_FEATURE_REP_GOOD,
> - clear_page_erms, X86_FEATURE_ERMS,
> + kmsan_unpoison_memory(page, length);
> +
> + alternative_call_2(clear_pages_orig,
> + clear_pages_rep, X86_FEATURE_REP_GOOD,
> + clear_pages_erms, X86_FEATURE_ERMS,
> "=D" (page),
> - "D" (page),
> + ASM_INPUT("D" (page), "S" (length)),
> "cc", "memory", "rax", "rcx");
> }
>
> diff --git a/arch/x86/lib/clear_page_64.S b/arch/x86/lib/clear_page_64.S
> index a508e4a8c66a..bce516263b69 100644
> --- a/arch/x86/lib/clear_page_64.S
> +++ b/arch/x86/lib/clear_page_64.S
> @@ -13,20 +13,35 @@
> */
>
> /*
> - * Zero a page.
> - * %rdi - page
> + * Zero kernel page aligned region.
> + *
> + * Input:
> + * %rdi - destination
> + * %esi - length
> + *
> + * Clobbers: %rax, %rcx
> */
> -SYM_TYPED_FUNC_START(clear_page_rep)
> - movl $4096/8,%ecx
> +SYM_TYPED_FUNC_START(clear_pages_rep)
> + movl %esi, %ecx
> xorl %eax,%eax
> + shrl $3,%ecx
> rep stosq
> RET
> -SYM_FUNC_END(clear_page_rep)
> -EXPORT_SYMBOL_GPL(clear_page_rep)
> +SYM_FUNC_END(clear_pages_rep)
> +EXPORT_SYMBOL_GPL(clear_pages_rep)
>
> -SYM_TYPED_FUNC_START(clear_page_orig)
> +/*
> + * Original page zeroing loop.
> + * Input:
> + * %rdi - destination
> + * %esi - length
> + *
> + * Clobbers: %rax, %rcx, %rflags
> + */
> +SYM_TYPED_FUNC_START(clear_pages_orig)
> + movl %esi, %ecx
> xorl %eax,%eax
> - movl $4096/64,%ecx
> + shrl $6,%ecx

So if the natural input parameter is RCX, why is this function using
RSI as the input 'length' parameter? It causes unnecessary register
shuffling.
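
I.e. with the byte length passed in %rcx directly, the mov disappears.
Untested sketch, assuming the caller sets up %rcx:

```
SYM_TYPED_FUNC_START(clear_pages_rep)
	xorl	%eax,%eax
	shrq	$3,%rcx		/* byte length -> qword count */
	rep stosq
	RET
SYM_FUNC_END(clear_pages_rep)
```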

> +/*
> + * Zero kernel page aligned region.
> + *
> + * Input:
> + * %rdi - destination
> + * %esi - length
> + *
> + * Clobbers: %rax, %rcx
> + */
> +SYM_TYPED_FUNC_START(clear_pages_erms)
> + movl %esi, %ecx
> xorl %eax,%eax
> rep stosb
> RET

Same observation: unnecessary register shuffling.

Also, please rename this (now-) terribly named interface:

> +void clear_pages_orig(void *page, unsigned int length);
> +void clear_pages_rep(void *page, unsigned int length);
> +void clear_pages_erms(void *page, unsigned int length);

Because 'pages' is now a bit misleading, and why is the starting
address called a 'page'?

So a more sensible namespace would be to follow memset nomenclature:

void memzero_page_aligned_*(void *addr, unsigned long len);

... and note the intentional abbreviation to 'len'.
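
Spelled out, that would be something like (sketch; suffixes follow the
existing alternatives):

```c
void memzero_page_aligned_orig(void *addr, unsigned long len);
void memzero_page_aligned_rep(void *addr, unsigned long len);
void memzero_page_aligned_erms(void *addr, unsigned long len);
```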

Also, since most of these changes are to x86 architecture code, this is
a new interface only used by x86, and the MM glue is minimal, I'd like
to merge this series via the x86 tree, if the glue gets acks from MM
folks.

Thanks,

Ingo