Re: [RFC PATCH 7/8] mm/vmalloc: Coalesce same page_shift mappings in vmap to avoid pgtable zigzag

From: Dev Jain

Date: Wed Apr 08 2026 - 07:40:50 EST




On 08/04/26 8:21 am, Barry Song (Xiaomi) wrote:
> For vmap(), detect pages with the same page_shift and map them in
> batches, avoiding the pgtable zigzag caused by per-page mapping.
>
> Signed-off-by: Barry Song (Xiaomi) <baohua@xxxxxxxxxx>
> ---

In patch 4 you eliminate the pagetable rewalk, in patch 5 you
re-introduce it, and in this patch you eliminate it again.
So please just squash this patch into patch 5.

> mm/vmalloc.c | 24 ++++++++++++++++++++----
> 1 file changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 6643ec0288cd..3c3b7217693a 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3551,6 +3551,8 @@ static int vmap_contig_pages_range(unsigned long addr, unsigned long end,
> pgprot_t prot, struct page **pages)
> {
> unsigned int count = (end - addr) >> PAGE_SHIFT;
> + unsigned int prev_shift = 0, idx = 0;
> + unsigned long map_addr = addr;
> int err;
>
> err = kmsan_vmap_pages_range_noflush(addr, end, prot, pages,
> @@ -3562,15 +3564,29 @@ static int vmap_contig_pages_range(unsigned long addr, unsigned long end,
> unsigned int shift = PAGE_SHIFT +
> get_vmap_batch_order(pages, count - i, i);
>
> - err = vmap_range_noflush(addr, addr + (1UL << shift),
> - page_to_phys(pages[i]), prot, shift);
> - if (err)
> - goto out;
> + if (!i)
> + prev_shift = shift;
> +
> + if (shift != prev_shift) {
> + err = vmap_small_pages_range_noflush(map_addr, addr,
> + prot, pages + idx,
> + min(prev_shift, PMD_SHIFT));
> + if (err)
> + goto out;
> + prev_shift = shift;
> + map_addr = addr;
> + idx = i;
> + }
>
> addr += 1UL << shift;
> i += 1U << (shift - PAGE_SHIFT);
> }
>
> + /* Remaining */
> + if (map_addr < end)
> + err = vmap_small_pages_range_noflush(map_addr, end,
> + prot, pages + idx, min(prev_shift, PMD_SHIFT));
> +
> out:
> flush_cache_vmap(addr, end);
> return err;