[PATCH] mm/vmalloc.c: add vb into appropriate vbq->free

From: Baoquan He
Date: Fri May 31 2024 - 11:44:57 EST


The current vbq is a per-cpu data structure consisting of an xarray
and a free list. However, adding a vmap_block to vbq->free is not
handled correctly: a new vmap_block can be allocated on one cpu while,
judging by its address, it actually belongs to another cpu's per-cpu
vbq. The list_for_each_entry_rcu() walk over that vbq->free and the
later deletion from it can then cause list breakage.
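
To illustrate the mismatch (a simplified sketch of what
new_vmap_block() currently does, not a literal quote of the code):

    /* the xarray slot is chosen by the block's address ... */
    xa  = addr_to_vb_xa(va->va_start);

    /* ... but the free list is chosen by whichever cpu runs this */
    vbq = raw_cpu_ptr(&vmap_block_queue);
    spin_lock(&vbq->lock);
    list_add_tail_rcu(&vb->free_list, &vbq->free);
    spin_unlock(&vbq->lock);

When the allocating cpu is not the cpu the address hashes to, the vb
sits on a vbq->free list that does not correspond to its xarray,
which is how the list breakage described above can happen.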

Fix the wrong vb adding so that the vb is added to the expected
vbq->free list, i.e. the one selected by the block's address.

Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
---
mm/vmalloc.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b921baf0ef8a..47659b41259a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2547,6 +2547,14 @@ addr_to_vb_xa(unsigned long addr)
return &per_cpu(vmap_block_queue, index).vmap_blocks;
}

+static struct vmap_block_queue *
+addr_to_vbq(unsigned long addr)
+{
+ int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
+
+ return &per_cpu(vmap_block_queue, index);
+}
+
/*
* We should probably have a fallback mechanism to allocate virtual memory
* out of partially filled vmap blocks. However vmap block sizing should be
@@ -2626,7 +2634,7 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
return ERR_PTR(err);
}

- vbq = raw_cpu_ptr(&vmap_block_queue);
+ vbq = addr_to_vbq(va->va_start);
spin_lock(&vbq->lock);
list_add_tail_rcu(&vb->free_list, &vbq->free);
spin_unlock(&vbq->lock);
--
2.41.0