Re: [BUG] SLOB's krealloc() seems bust

From: Matt Mackall
Date: Tue Oct 07 2008 - 19:12:44 EST



On Tue, 2008-10-07 at 20:31 +0300, Pekka Enberg wrote:
> Hi Matt,
>
> On Tue, Oct 7, 2008 at 8:13 PM, Matt Mackall <mpm@xxxxxxxxxxx> wrote:
> >> > @@ -515,7 +515,7 @@
> >> >
> >> > sp = (struct slob_page *)virt_to_page(block);
> >> > if (slob_page(sp))
> >> > - return ((slob_t *)block - 1)->units + SLOB_UNIT;
> >> > + return (((slob_t *)block - 1)->units - 1) * SLOB_UNIT;
> >>
> >> Hmm. I don't understand why we do the "minus one" thing here. Aren't
> >> we underestimating the size now?
> >
> > The first -1 takes us to the object header in front of the object
> > pointer. The second -1 subtracts out the size of the header.
> >
> > But it's entirely possible I'm off by one, so I'll double-check. Nick?
>
> Yeah, I was referring to the second subtraction. Looking at
> slob_page_alloc(), for example, we compare the return value of
> slob_units() to SLOB_UNITS(size), so I don't think we count the header
> in ->units. I mean, we ought to be seeing the subtraction elsewhere in
> the code as well, no?

Ok, I've looked a bit closer at it and I think we need a different fix.

The underlying allocator, slob_alloc, takes a size in bytes and returns
an object of that size, with the first word containing the number of
slob_t units.
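That contract can be sketched as a standalone toy in plain C (ordinary malloc standing in for the real allocator, and a 4-byte SLOB_UNIT assumed purely for illustration; `toy_slob_alloc` is a hypothetical helper, not kernel API):

```c
#include <assert.h>
#include <stdlib.h>

/* Assumed stand-ins: in the kernel, SLOB_UNIT is sizeof(slob_t). */
#define SLOB_UNIT 4
#define SLOB_UNITS(bytes) (((bytes) + SLOB_UNIT - 1) / SLOB_UNIT)

/* Toy slob_alloc(): returns a block of at least `size` bytes whose
 * first word records the block's size in slob_t units. */
static void *toy_slob_alloc(size_t size)
{
	unsigned int *obj = malloc(size + SLOB_UNIT); /* room for bookkeeping */
	if (obj)
		obj[0] = SLOB_UNITS(size); /* first word: size in slob units */
	return obj;
}
```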

kmalloc calls slob_alloc after adding on some space for header and
architecture padding. This space is not necessarily 1 slob unit:

unsigned int *m;
int align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
...
m = slob_alloc(size + align, gfp, align, node);
*m = size;
return (void *)m + align;

Note that we overwrite the header with our own size -in bytes-.
kfree does the reverse:

int align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
unsigned int *m = (unsigned int *)(block - align);
slob_free(m, *m + align);

That second line is locating the kmalloc header. All looks good.
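The kmalloc/kfree pairing can be exercised end to end with a standalone toy (again ordinary malloc standing in for slob_alloc, and an assumed 8-byte alignment; none of these names are the kernel's):

```c
#include <assert.h>
#include <stdlib.h>

/* Assumed stand-in for max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN). */
#define KM_ALIGN 8

/* Toy kmalloc: allocate size + align, stash the byte size in the
 * header word, and hand back the address just past the padding. */
static void *toy_kmalloc(size_t size)
{
	unsigned int *m = malloc(size + KM_ALIGN); /* slob_alloc() in the real code */
	if (!m)
		return NULL;
	*m = (unsigned int)size;    /* overwrite header with size in bytes */
	return (char *)m + KM_ALIGN;
}

/* Toy kfree: step back over the padding to find the header again. */
static void toy_kfree(void *block)
{
	unsigned int *m = (unsigned int *)((char *)block - KM_ALIGN);
	/* real code: slob_free(m, *m + align); */
	free(m);
}
```

The point of the sketch is that anything reading the header back -- kfree() or ksize() -- must step back by the same align, which is exactly what the old ksize() failed to do.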

The MINALIGN business was introduced by Nick with:

slob: improved alignment handling

but seems to have missed ksize, which should now be doing the following
to match:

diff -r 5e32b09a1b2b mm/slob.c
--- a/mm/slob.c Fri Oct 03 14:04:43 2008 -0500
+++ b/mm/slob.c Tue Oct 07 18:05:15 2008 -0500
@@ -514,9 +514,11 @@
 		return 0;

 	sp = (struct slob_page *)virt_to_page(block);
-	if (slob_page(sp))
-		return ((slob_t *)block - 1)->units + SLOB_UNIT;
-	else
+	if (slob_page(sp)) {
+		int align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
+		unsigned int *m = (unsigned int *)(block - align);
+		return SLOB_UNITS(*m) * SLOB_UNIT; /* round up, in bytes */
+	} else
 		return sp->page.private;
 }
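For reference, the arithmetic the new ksize() leans on: SLOB_UNITS() rounds a byte count up to whole slob units, so multiplying back by SLOB_UNIT gives the smallest unit-aligned byte size covering the stored request (4-byte units assumed here purely for illustration):

```c
#include <assert.h>

#define SLOB_UNIT 4 /* assumed; sizeof(slob_t) in the kernel */
#define SLOB_UNITS(size) (((size) + SLOB_UNIT - 1) / SLOB_UNIT)

/* Round a stored byte size up to whole slob units, returning bytes. */
static unsigned int rounded_size(unsigned int bytes)
{
	return SLOB_UNITS(bytes) * SLOB_UNIT;
}
```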


That leaves the question of why this morning's patch worked at all,
given that it was based on how SLOB worked before Nick's patch. But I
haven't finished working through that. Peter, can I get you to test the
above?

--
Mathematics is the supreme nostalgia of our time.
