Re: [PATCH v1] mm/vmalloc: fix exact allocations with an alignment > 1

From: David Hildenbrand
Date: Thu Sep 23 2021 - 13:43:06 EST


On 22.09.21 12:41, Uladzislau Rezki wrote:
On Wed, Sep 22, 2021 at 10:34:55AM +0200, David Hildenbrand wrote:
No, that's leaking implementation details to the caller. And no, increasing
the range and eventually allocating something bigger (e.g., placing a huge
page where it might not have been possible) is not acceptable for KASAN.

If you're terribly unhappy with this patch,
Sorry to say, but it simply does not make sense.


Let's agree to disagree.

find_vmap_lowest_match() is imprecise now, and that's an issue for exact
allocations. We can either make it fully precise again (possibly degrading
allocation performance) or just special-case exact allocations to fix the
regression.

I decided to take the easy path and do the latter; I do agree that making
find_vmap_lowest_match() fully precise again might be preferable -- we could
have other allocations failing right now even though there are still suitable
holes.
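
To illustrate with a standalone sketch (the addresses below are made up,
and ALIGN_UP is just a local helper): a free block that is exactly 2 MiB
large and already 2 MiB aligned can satisfy an exact 2 MiB request, yet the
padded search length of "size + align - 1" rejects it, while a precise
per-node check along the lines of is_within_this_va() accepts it:

<snip>
#include <stdio.h>
#include <stdbool.h>

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	/* A free block that is exactly 2 MiB large and 2 MiB aligned. */
	unsigned long va_start = 0x100200000UL;
	unsigned long va_end   = 0x100400000UL;

	unsigned long size  = 2UL << 20;	/* exact 2 MiB request */
	unsigned long align = 2UL << 20;

	/* Padded search length: any block of this length can satisfy
	 * the alignment in the worst case, but exact fits are missed. */
	unsigned long length = size + align - 1;
	bool padded_match = va_end - va_start >= length;

	/* Precise check: align the start inside this block and test
	 * whether the request still fits. */
	unsigned long nva_start = ALIGN_UP(va_start, align);
	bool precise_match = nva_start + size <= va_end;

	printf("padded search matches:  %d\n", padded_match);	/* 0 */
	printf("precise check matches:  %d\n", precise_match);	/* 1 */
	return 0;
}
<snip>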

I briefly thought about performing the search in find_vmap_lowest_match()
twice. First, start the search without an extended range, and fall back to
the extended range if that search fails. Unfortunately, I think that still
won't make the function completely precise due to the way we might miss
searching some suitable subtrees.


please suggest something reasonable to fix exact allocations that:
a) fixes the KASAN use case,
b) allows for automatic placement of huge pages for exact allocations, and
c) doesn't leak implementation details into the caller.

I am looking at it.

I am testing this:

<snip>
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index dcf23d16a308..cdf3bda6313d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1161,18 +1161,14 @@ find_vmap_lowest_match(unsigned long size,
 {
 	struct vmap_area *va;
 	struct rb_node *node;
-	unsigned long length;
 
 	/* Start from the root. */
 	node = free_vmap_area_root.rb_node;
 
-	/* Adjust the search size for alignment overhead. */
-	length = size + align - 1;
-
 	while (node) {
 		va = rb_entry(node, struct vmap_area, rb_node);
 
-		if (get_subtree_max_size(node->rb_left) >= length &&
+		if (get_subtree_max_size(node->rb_left) >= size &&
 				vstart < va->va_start) {
 			node = node->rb_left;
 		} else {
@@ -1182,9 +1178,9 @@ find_vmap_lowest_match(unsigned long size,
 			/*
 			 * Does not make sense to go deeper towards the right
 			 * sub-tree if it does not have a free block that is
-			 * equal or bigger to the requested search length.
+			 * equal or bigger to the requested search size.
 			 */
-			if (get_subtree_max_size(node->rb_right) >= length) {
+			if (get_subtree_max_size(node->rb_right) >= size) {
 				node = node->rb_right;
 				continue;
 			}
@@ -1192,16 +1188,30 @@ find_vmap_lowest_match(unsigned long size,
 			/*
 			 * OK. We roll back and find the first right sub-tree,
 			 * that will satisfy the search criteria. It can happen
-			 * only once due to "vstart" restriction.
+			 * due to "vstart" restriction or an alignment overhead.
 			 */
 			while ((node = rb_parent(node))) {
 				va = rb_entry(node, struct vmap_area, rb_node);
 				if (is_within_this_va(va, size, align, vstart))
 					return va;
 
-				if (get_subtree_max_size(node->rb_right) >= length &&
+				if (get_subtree_max_size(node->rb_right) >= size &&
 						vstart <= va->va_start) {
+					/*
+					 * Shift vstart forward so we do not loop over
+					 * the same sub-tree back and forth. The aim is
+					 * to continue scanning toward higher addresses,
+					 * cutting off previous ones.
+					 *
+					 * Note that vstart is set to the parent's start
+					 * address plus 1 so that we do not enter the
+					 * same sub-tree again after it has already been
+					 * inspected and no suitable free block was found.
+					 */
+					vstart = va->va_start + 1;
 					node = node->rb_right;
+
+					/* Scan a sub-tree rooted at "node". */
 					break;
 				}
 			}
<snip>

so it handles any alignment and is accurate when it comes to searching for the
lowest free block when a user wants to allocate with a specific alignment value.
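
As a rough userspace model of the reworked walk (toy structures only, no
rb-tree rebalancing, simplified fit check; all names and addresses are
illustrative), the following shows the rollback bumping vstart past an
already inspected parent and then finding an aligned block at a higher
address:

<snip>
#include <stdio.h>
#include <stddef.h>

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

/* Toy free-block node, augmented like the kernel's free vmap tree. */
struct node {
	unsigned long va_start, va_end;
	unsigned long subtree_max_size;
	struct node *left, *right, *parent;
};

static unsigned long max_size(struct node *n)
{
	return n ? n->subtree_max_size : 0;
}

/* Simplified per-node check, like is_within_this_va() without the
 * overflow handling. */
static int fits(struct node *n, unsigned long size, unsigned long align,
		unsigned long vstart)
{
	unsigned long start = n->va_start > vstart ? n->va_start : vstart;

	start = ALIGN_UP(start, align);
	return start + size <= n->va_end;
}

/* The reworked walk: prune subtrees by "size" and bump vstart when
 * rolling back into a right sub-tree. */
static struct node *lowest_match(struct node *node, unsigned long size,
		unsigned long align, unsigned long vstart)
{
	while (node) {
		if (max_size(node->left) >= size && vstart < node->va_start) {
			node = node->left;
		} else {
			if (fits(node, size, align, vstart))
				return node;

			if (max_size(node->right) >= size) {
				node = node->right;
				continue;
			}

			while ((node = node->parent)) {
				if (fits(node, size, align, vstart))
					return node;
				if (max_size(node->right) >= size &&
						vstart <= node->va_start) {
					/* Cut off already inspected parts. */
					vstart = node->va_start + 1;
					node = node->right;
					break;
				}
			}
		}
	}
	return NULL;
}

int main(void)
{
	/* Free blocks: [0x1000, 0x2800), [0x3000, 0x3800), [0x5000, 0x7000). */
	struct node l = { 0x1000, 0x2800, 0x1800, NULL, NULL, NULL };
	struct node r = { 0x5000, 0x7000, 0x2000, NULL, NULL, NULL };
	struct node p = { 0x3000, 0x3800, 0x2000, &l, &r, NULL };
	struct node *match;

	l.parent = r.parent = &p;

	/*
	 * A 4 KiB request with 8 KiB alignment: the left block is big
	 * enough but fails the alignment, the root is too small, so the
	 * walk rolls back, bumps vstart past 0x3000 and settles on
	 * [0x5000, 0x7000). The old padded search (length = 0x2fff)
	 * would have found no match at all here.
	 */
	match = lowest_match(&p, 0x1000, 0x2000, 0);
	if (match)
		printf("match: [0x%lx, 0x%lx)\n", match->va_start, match->va_end);
	return 0;
}
<snip>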

Could you please help and test the KASAN use case?


Sure, I'll give it a spin tomorrow! Thanks!


--
Thanks,

David / dhildenb