[PATCH] mm: Refactor vm_map_pages() to use vm_insert_pages()
From: Justin Green
Date: Wed Jan 28 2026 - 17:57:33 EST
vm_map_pages() currently calls vm_insert_page() on each individual page in
the mapping, which creates significant overhead because the page table
lock is taken and released once per page. Batch-insert the pages with
vm_insert_pages() instead, which amortizes the cost of the lock across
the whole mapping.
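For illustration, here is a minimal sketch of the before/after calling
pattern (the helper names are hypothetical and not part of this patch;
per mm/memory.c, vm_insert_pages() returns 0 on success and updates its
count argument to the number of pages left un-inserted):

#include <linux/mm.h>

/* Before: vm_insert_page() takes the page table lock once per page. */
static int map_one_by_one(struct vm_area_struct *vma,
			  struct page **pages, unsigned long count)
{
	unsigned long uaddr = vma->vm_start;
	unsigned long i;
	int ret;

	for (i = 0; i < count; i++) {
		ret = vm_insert_page(vma, uaddr, pages[i]);
		if (ret < 0)
			return ret;
		uaddr += PAGE_SIZE;
	}
	return 0;
}

/* After: one call inserts the whole batch, amortizing the lock cost. */
static int map_batched(struct vm_area_struct *vma,
		       struct page **pages, unsigned long count)
{
	/* On return, count holds the number of pages not yet inserted. */
	return vm_insert_pages(vma, vma->vm_start, pages, &count);
}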
Tested by watching hardware-accelerated video playback on an MTK ChromeOS
device. This particular path maps both a V4L2 buffer and a GEM-allocated
buffer into userspace and converts the contents from one pixel format to
another. Both vb2_mmap() and mtk_gem_object_mmap() exercise this pathway.
Signed-off-by: Justin Green <greenjustin@xxxxxxxxxxxx>
---
mm/memory.c | 10 +---------
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index da360a6eb8a4..7ae6ac42e7d8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2499,7 +2499,6 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
 {
 	unsigned long count = vma_pages(vma);
 	unsigned long uaddr = vma->vm_start;
-	int ret, i;
 
 	/* Fail if the user requested offset is beyond the end of the object */
 	if (offset >= num)
@@ -2509,14 +2508,7 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
 	if (count > num - offset)
 		return -ENXIO;
 
-	for (i = 0; i < count; i++) {
-		ret = vm_insert_page(vma, uaddr, pages[offset + i]);
-		if (ret < 0)
-			return ret;
-		uaddr += PAGE_SIZE;
-	}
-
-	return 0;
+	return vm_insert_pages(vma, uaddr, pages + offset, &count);
 }
 
 /**
--
2.53.0.rc1.217.geba53bf80e-goog