[PATCH 1/2] mm/vmalloc: Do not adjust the search size for alignment overhead

From: Uladzislau Rezki (Sony)
Date: Mon Oct 04 2021 - 10:28:47 EST


We used to include the alignment overhead in the search length, which
guarantees that a found area will still fit after the user-specified
alignment is applied. On the other hand, it does not guarantee that
the returned area has the lowest address when the alignment is
>= PAGE_SIZE.

As a result, when a user specifies a special alignment together with
a range that exactly matches the requested size, the allocation fails.
This is what happens to KASAN: while onlining memory banks it wants a
free block that exactly matches the specified range:

[root@vm-0 fedora]# echo online > /sys/devices/system/memory/memory82/state
[root@vm-0 fedora]# echo online > /sys/devices/system/memory/memory83/state
[root@vm-0 fedora]# echo online > /sys/devices/system/memory/memory85/state
[root@vm-0 fedora]# echo online > /sys/devices/system/memory/memory84/state
[ 223.858115] vmap allocation for size 16777216 failed: use vmalloc=<size> to increase size
[ 223.859415] bash: vmalloc: allocation failure: 16777216 bytes, mode:0x6000c0(GFP_KERNEL), nodemask=(null),cpuset=/,mems_allowed=0
[ 223.860992] CPU: 4 PID: 1644 Comm: bash Kdump: loaded Not tainted 4.18.0-339.el8.x86_64+debug #1
[ 223.862149] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 223.863580] Call Trace:
[ 223.863946] dump_stack+0x8e/0xd0
[ 223.864420] warn_alloc.cold.90+0x8a/0x1b2
[ 223.864990] ? zone_watermark_ok_safe+0x300/0x300
[ 223.865626] ? slab_free_freelist_hook+0x85/0x1a0
[ 223.866264] ? __get_vm_area_node+0x240/0x2c0
[ 223.866858] ? kfree+0xdd/0x570
[ 223.867309] ? kmem_cache_alloc_node_trace+0x157/0x230
[ 223.868028] ? notifier_call_chain+0x90/0x160
[ 223.868625] __vmalloc_node_range+0x465/0x840
[ 223.869230] ? mark_held_locks+0xb7/0x120

Fix it by making sure that find_vmap_lowest_match() returns the lowest
start address for any given alignment value, i.e. for alignments
bigger than PAGE_SIZE the algorithm rolls back toward the parent
nodes, checking the right sub-trees, if the leftmost free block did
not fit due to the alignment overhead.

Fixes: 68ad4a330433 ("mm/vmalloc.c: keep track of free blocks for vmap allocation")
Reported-by: Ping Fang <pifang@xxxxxxxxxx>
Tested-by: David Hildenbrand <david@xxxxxxxxxx>
Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
---
mm/vmalloc.c | 22 +++++++++++++---------
1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 48e717626e94..9cce45dbdee0 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1195,18 +1195,14 @@ find_vmap_lowest_match(unsigned long size,
{
struct vmap_area *va;
struct rb_node *node;
- unsigned long length;

/* Start from the root. */
node = free_vmap_area_root.rb_node;

- /* Adjust the search size for alignment overhead. */
- length = size + align - 1;
-
while (node) {
va = rb_entry(node, struct vmap_area, rb_node);

- if (get_subtree_max_size(node->rb_left) >= length &&
+ if (get_subtree_max_size(node->rb_left) >= size &&
vstart < va->va_start) {
node = node->rb_left;
} else {
@@ -1216,9 +1212,9 @@ find_vmap_lowest_match(unsigned long size,
/*
* Does not make sense to go deeper towards the right
* sub-tree if it does not have a free block that is
- * equal or bigger to the requested search length.
+ * equal to or bigger than the requested search size.
*/
- if (get_subtree_max_size(node->rb_right) >= length) {
+ if (get_subtree_max_size(node->rb_right) >= size) {
node = node->rb_right;
continue;
}
@@ -1226,15 +1222,23 @@ find_vmap_lowest_match(unsigned long size,
/*
* OK. We roll back and find the first right sub-tree,
* that will satisfy the search criteria. It can happen
- * only once due to "vstart" restriction.
+ * due to "vstart" restriction or an alignment overhead
+ * that is bigger than PAGE_SIZE.
*/
while ((node = rb_parent(node))) {
va = rb_entry(node, struct vmap_area, rb_node);
if (is_within_this_va(va, size, align, vstart))
return va;

- if (get_subtree_max_size(node->rb_right) >= length &&
+ if (get_subtree_max_size(node->rb_right) >= size &&
vstart <= va->va_start) {
+ /*
+ * Shift the vstart forward. Please note, we update it with
+ * parent's start address adding "1" because we do not want
+ * to enter the same sub-tree after it has already been checked
+ * and no suitable free block found there.
+ */
+ vstart = va->va_start + 1;
node = node->rb_right;
break;
}
--
2.20.1